Dec 05 01:08:20 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 05 01:08:20 crc restorecon[4749]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:20 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 
01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 01:08:21 crc 
restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 
01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 
01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc 
restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 01:08:21 crc restorecon[4749]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 05 01:08:21 crc kubenswrapper[4990]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 01:08:21 crc kubenswrapper[4990]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 05 01:08:21 crc kubenswrapper[4990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 01:08:21 crc kubenswrapper[4990]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 05 01:08:21 crc kubenswrapper[4990]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 05 01:08:21 crc kubenswrapper[4990]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.765511 4990 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769799 4990 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769892 4990 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769902 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769909 4990 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769916 4990 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769923 4990 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769930 4990 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769936 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769942 4990 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769948 4990 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769957 4990 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769965 4990 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769973 4990 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769980 4990 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769986 4990 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769992 4990 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.769998 4990 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770004 4990 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770010 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770016 4990 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770022 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770027 4990 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770033 4990 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770038 4990 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770044 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770050 4990 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770055 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770061 4990 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770066 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770071 4990 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770077 4990 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770082 4990 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770088 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770094 4990 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770099 4990 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770105 4990 feature_gate.go:330] unrecognized feature gate: Example Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770110 4990 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiNetworks Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770115 4990 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770121 4990 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770127 4990 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770133 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770138 4990 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770143 4990 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770150 4990 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770156 4990 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770160 4990 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770167 4990 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770172 4990 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770179 4990 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770187 4990 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770194 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770257 4990 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770742 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770827 4990 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770838 4990 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770862 4990 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770872 4990 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770883 4990 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770891 4990 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770900 4990 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770908 4990 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770917 4990 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770925 4990 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770949 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770959 4990 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770967 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770978 4990 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770987 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.770995 4990 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.771003 4990 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.771012 4990 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771357 4990 flags.go:64] FLAG: --address="0.0.0.0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771390 4990 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771421 4990 flags.go:64] FLAG: --anonymous-auth="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771441 4990 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771457 4990 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771469 4990 
flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771531 4990 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771548 4990 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771558 4990 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771569 4990 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771580 4990 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771591 4990 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771601 4990 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771611 4990 flags.go:64] FLAG: --cgroup-root="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771622 4990 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771640 4990 flags.go:64] FLAG: --client-ca-file="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771650 4990 flags.go:64] FLAG: --cloud-config="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771666 4990 flags.go:64] FLAG: --cloud-provider="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771677 4990 flags.go:64] FLAG: --cluster-dns="[]" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771691 4990 flags.go:64] FLAG: --cluster-domain="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771702 4990 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771713 4990 flags.go:64] FLAG: --config-dir="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771725 4990 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771738 4990 flags.go:64] FLAG: --container-log-max-files="5" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771760 4990 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771772 4990 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771783 4990 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771794 4990 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771805 4990 flags.go:64] FLAG: --contention-profiling="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771816 4990 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771826 4990 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771837 4990 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771854 4990 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771870 4990 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771881 4990 flags.go:64] FLAG: 
--enable-controller-attach-detach="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771892 4990 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771903 4990 flags.go:64] FLAG: --enable-load-reader="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771914 4990 flags.go:64] FLAG: --enable-server="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771927 4990 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771946 4990 flags.go:64] FLAG: --event-burst="100" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771963 4990 flags.go:64] FLAG: --event-qps="50" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771973 4990 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771984 4990 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.771995 4990 flags.go:64] FLAG: --eviction-hard="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772008 4990 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772018 4990 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772028 4990 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772039 4990 flags.go:64] FLAG: --eviction-soft="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772049 4990 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772065 4990 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772076 4990 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772086 4990 flags.go:64] FLAG: --experimental-mounter-path="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772099 4990 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772113 4990 flags.go:64] FLAG: --fail-swap-on="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772123 4990 flags.go:64] FLAG: --feature-gates="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772138 4990 flags.go:64] FLAG: --file-check-frequency="20s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772150 4990 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772237 4990 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772248 4990 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772259 4990 flags.go:64] FLAG: --healthz-port="10248" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772271 4990 flags.go:64] FLAG: --help="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772284 4990 flags.go:64] FLAG: --hostname-override="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772295 4990 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772306 4990 flags.go:64] FLAG: --http-check-frequency="20s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772318 4990 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 05 
01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772337 4990 flags.go:64] FLAG: --image-credential-provider-config="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772347 4990 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772357 4990 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772371 4990 flags.go:64] FLAG: --image-service-endpoint="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772386 4990 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772399 4990 flags.go:64] FLAG: --kube-api-burst="100" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772412 4990 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772424 4990 flags.go:64] FLAG: --kube-api-qps="50" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772438 4990 flags.go:64] FLAG: --kube-reserved="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772460 4990 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772474 4990 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772914 4990 flags.go:64] FLAG: --kubelet-cgroups="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772936 4990 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772951 4990 flags.go:64] FLAG: --lock-file="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772962 4990 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772973 4990 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.772983 4990 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773000 4990 flags.go:64] FLAG: --log-json-split-stream="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773010 4990 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773019 4990 flags.go:64] FLAG: --log-text-split-stream="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773029 4990 flags.go:64] FLAG: --logging-format="text" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773039 4990 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773051 4990 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773062 4990 flags.go:64] FLAG: --manifest-url="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773073 4990 flags.go:64] FLAG: --manifest-url-header="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773090 4990 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773101 4990 flags.go:64] FLAG: --max-open-files="1000000" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773113 4990 flags.go:64] FLAG: --max-pods="110" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773124 4990 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773135 4990 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 05 01:08:21 
crc kubenswrapper[4990]: I1205 01:08:21.773147 4990 flags.go:64] FLAG: --memory-manager-policy="None" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773157 4990 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773167 4990 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773176 4990 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773186 4990 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773224 4990 flags.go:64] FLAG: --node-status-max-images="50" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773234 4990 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773244 4990 flags.go:64] FLAG: --oom-score-adj="-999" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773253 4990 flags.go:64] FLAG: --pod-cidr="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773263 4990 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773279 4990 flags.go:64] FLAG: --pod-manifest-path="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773288 4990 flags.go:64] FLAG: --pod-max-pids="-1" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773298 4990 flags.go:64] FLAG: --pods-per-core="0" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773307 4990 flags.go:64] FLAG: --port="10250" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773316 4990 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773326 4990 flags.go:64] FLAG: --provider-id="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773338 4990 flags.go:64] FLAG: --qos-reserved="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773348 4990 flags.go:64] FLAG: --read-only-port="10255" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773357 4990 flags.go:64] FLAG: --register-node="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773367 4990 flags.go:64] FLAG: --register-schedulable="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773380 4990 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773404 4990 flags.go:64] FLAG: --registry-burst="10" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773416 4990 flags.go:64] FLAG: --registry-qps="5" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773428 4990 flags.go:64] FLAG: --reserved-cpus="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773452 4990 flags.go:64] FLAG: --reserved-memory="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773467 4990 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773477 4990 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773530 4990 flags.go:64] FLAG: --rotate-certificates="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773547 4990 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 05 01:08:21 crc 
kubenswrapper[4990]: I1205 01:08:21.773559 4990 flags.go:64] FLAG: --runonce="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773568 4990 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773579 4990 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773589 4990 flags.go:64] FLAG: --seccomp-default="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773599 4990 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773804 4990 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773814 4990 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773824 4990 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773833 4990 flags.go:64] FLAG: --storage-driver-password="root" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773842 4990 flags.go:64] FLAG: --storage-driver-secure="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773851 4990 flags.go:64] FLAG: --storage-driver-table="stats" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773861 4990 flags.go:64] FLAG: --storage-driver-user="root" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773871 4990 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773881 4990 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773890 4990 flags.go:64] FLAG: --system-cgroups="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773899 4990 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773914 4990 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773923 4990 flags.go:64] FLAG: --tls-cert-file="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773933 4990 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773946 4990 flags.go:64] FLAG: --tls-min-version="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773956 4990 flags.go:64] FLAG: --tls-private-key-file="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773965 4990 flags.go:64] FLAG: --topology-manager-policy="none" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773974 4990 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773984 4990 flags.go:64] FLAG: --topology-manager-scope="container" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.773993 4990 flags.go:64] FLAG: --v="2" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.774007 4990 flags.go:64] FLAG: --version="false" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.774021 4990 flags.go:64] FLAG: --vmodule="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.774033 4990 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.774043 4990 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774341 4990 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 
01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774355 4990 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774363 4990 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774374 4990 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774386 4990 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774401 4990 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774415 4990 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774429 4990 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774442 4990 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774453 4990 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774464 4990 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774475 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774513 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774522 4990 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774531 4990 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774540 4990 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774548 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774575 4990 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774583 4990 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774591 4990 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774601 4990 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774609 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774616 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774625 4990 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774633 4990 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774641 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774648 4990 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774656 4990 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774664 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774672 4990 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774679 4990 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774687 4990 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774728 4990 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774737 4990 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774744 4990 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774752 4990 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774763 4990 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774774 4990 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774783 4990 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774796 4990 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
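[editor's note] The flags.go:64 block a few entries above (--address through --volume-stats-agg-period) appears to be the kubelet echoing every command-line flag at verbosity 2, defaults included, and is the place to check what the wrapper actually passed (e.g. --config="/etc/kubernetes/kubelet.conf", --node-ip="192.168.126.11", --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"). A sketch in the same assumed style as the one above turns that dump into a dict for cross-checking against the config file:

import re
import sys

# Each startup flag is echoed as:  flags.go:64] FLAG: --name="value"
FLAG = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def flag_dump(journal_text):
    # Collect the flags.go:64 echo into {flag: value}.
    return dict(FLAG.findall(journal_text))

if __name__ == "__main__":
    flags = flag_dump(sys.stdin.read())
    # Cross-check the flags the deprecation notices said to move into --config.
    for name in ("--config", "--container-runtime-endpoint", "--volume-plugin-dir",
                 "--register-with-taints", "--system-reserved"):
        print(name, "=", flags.get(name))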
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774807 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774816 4990 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774825 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774834 4990 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774842 4990 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.774850 4990 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775048 4990 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775056 4990 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775064 4990 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775072 4990 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775079 4990 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775088 4990 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775095 4990 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775117 4990 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775125 4990 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775142 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775150 4990 feature_gate.go:330] unrecognized feature gate: Example Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775158 4990 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775166 4990 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775174 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775181 4990 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775190 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775198 4990 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775206 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775214 4990 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775223 4990 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775234 4990 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775244 4990 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775252 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775260 4990 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.775268 4990 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.775304 4990 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.787411 4990 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.787465 4990 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787630 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787649 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787658 4990 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787666 4990 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787678 4990 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787689 4990 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787698 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787706 4990 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787715 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787724 4990 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787732 4990 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787741 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787749 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787760 4990 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787769 4990 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787779 4990 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787788 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787796 4990 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787805 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787813 4990 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787821 4990 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787829 4990 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787837 4990 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787845 4990 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787853 4990 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787861 4990 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787869 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787877 4990 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787885 4990 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787894 4990 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787902 4990 
feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787910 4990 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787919 4990 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787927 4990 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787939 4990 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787947 4990 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787955 4990 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787964 4990 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787974 4990 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787984 4990 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.787993 4990 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788006 4990 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788017 4990 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788028 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788037 4990 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788046 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788055 4990 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788064 4990 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788073 4990 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788081 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788089 4990 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788097 4990 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788105 4990 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788113 4990 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788121 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788129 4990 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788139 4990 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788149 4990 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788157 4990 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788165 4990 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788173 4990 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788181 4990 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788190 4990 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788198 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788207 4990 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788216 4990 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788224 4990 feature_gate.go:330] unrecognized feature gate: Example Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788233 4990 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788242 4990 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788250 4990 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788261 4990 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.788276 4990 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788571 4990 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788588 4990 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788599 4990 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788608 4990 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788617 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788626 4990 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788635 4990 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788643 4990 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788652 4990 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788662 4990 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788673 4990 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788684 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788695 4990 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788703 4990 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788714 4990 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788723 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788733 4990 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788741 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788749 4990 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788757 4990 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788765 4990 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788773 4990 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788781 4990 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788792 4990 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788801 4990 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788810 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788818 4990 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788827 4990 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788836 4990 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788843 4990 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788852 4990 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788861 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788871 4990 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788880 4990 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788889 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788897 4990 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788906 4990 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788914 4990 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788922 4990 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788931 4990 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788939 4990 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788947 4990 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788956 4990 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788964 4990 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788973 4990 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788981 4990 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788989 4990 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.788998 4990 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789006 4990 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789015 4990 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 01:08:21 crc 
kubenswrapper[4990]: W1205 01:08:21.789024 4990 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789031 4990 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789040 4990 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789051 4990 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789061 4990 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789071 4990 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789080 4990 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789090 4990 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789099 4990 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789108 4990 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789116 4990 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789124 4990 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789133 4990 feature_gate.go:330] unrecognized feature gate: Example Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789143 4990 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789151 4990 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789160 4990 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789170 4990 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
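[editor's note] By this point the same list of unrecognized gates has scrolled past four times (timestamps 01:08:21.7698xx, .7743xx, .7876xx, and .7885xx), which suggests the kubelet re-parses the configured gate list several times during startup; each pass ends in an identical effective set, the "feature gates: {map[...]}" lines, where only the upstream gates (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy, and the false-valued ones) actually take effect. To deduplicate the noise when scanning a dump like this, a small counter sketch (again mine, not kubelet's):

import re
import sys
from collections import Counter

# feature_gate.go:330 warnings look like: "unrecognized feature gate: GatewayAPI"
UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def tally(journal_text):
    # Count each unrecognized gate; equal counts across all names
    # usually mean the whole list was simply parsed repeatedly.
    return Counter(UNRECOGNIZED.findall(journal_text))

if __name__ == "__main__":
    counts = tally(sys.stdin.read())
    print(len(counts), "distinct unrecognized feature gates")
    for name, n in counts.most_common():
        print(f"{n:4d}  {name}")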
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789181 4990 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789191 4990 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789200 4990 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.789210 4990 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.789222 4990 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.789800 4990 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.794170 4990 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.794316 4990 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.795227 4990 server.go:997] "Starting client certificate rotation"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.795273 4990 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.795505 4990 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-14 13:49:14.153910033 +0000 UTC
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.796076 4990 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 228h40m52.357837199s for next certificate rotation
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.803317 4990 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.806032 4990 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.815172 4990 log.go:25] "Validated CRI v1 runtime API"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.828372 4990 log.go:25] "Validated CRI v1 image API"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.829691 4990 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.832108 4990 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-05-01-03-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.832154 4990 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252
minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.849166 4990 manager.go:217] Machine: {Timestamp:2025-12-05 01:08:21.84757462 +0000 UTC m=+0.223790021 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ce964c17-1cf3-4471-84ac-c2fc1079c2f2 BootID:2415bd45-5145-44bb-b5a4-8197e19c19f6 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4d:2a:e2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4d:2a:e2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:26:8d:11 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:65:d6:1e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:14:d6:8a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7f:c4:f8 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:37:8c:51 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:d5:2c:a8:42:50 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:82:92:0b:c2:14:7d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] 
SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.849543 4990 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
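The Machine entry above is cAdvisor's hardware inventory for this node: 12 vCPUs exposed as 12 single-threaded sockets, MemoryCapacity 33654132736 bytes (about 31.3 GiB), no swap (SwapCapacity:0), and one 200 GiB virtio disk (vda). On Linux the memory figure ultimately derives from /proc/meminfo; a small self-contained sketch of reading it the same way (a hypothetical helper, not cAdvisor's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memTotalBytes reads MemTotal from /proc/meminfo, the kernel interface
// that machine-level memory capacity is ultimately derived from.
func memTotalBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// A line looks like: "MemTotal:       32865364 kB"
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemTotal:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return kb * 1024, nil
		}
	}
	return 0, fmt.Errorf("MemTotal not found")
}

func main() {
	b, err := memTotalBytes()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The node above reports MemoryCapacity:33654132736, i.e. ~31.3 GiB.
	fmt.Printf("%d bytes (%.1f GiB)\n", b, float64(b)/(1<<30))
}
```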
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.849758 4990 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.850670 4990 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.850951 4990 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.851002 4990 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.851333 4990 topology_manager.go:138] "Creating topology manager with none policy"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.851347 4990 container_manager_linux.go:303] "Creating device plugin manager"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.851627 4990 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.851667 4990 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.851916 4990 state_mem.go:36] "Initialized new in-memory state store"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.852036 4990 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.853182 4990 kubelet.go:418] "Attempting to sync node with API server"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.853217 4990 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
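The nodeConfig JSON above fixes the node's resource accounting: SystemReserved of 200m CPU / 350Mi memory / 350Mi ephemeral storage, KubeReserved null, and a hard eviction trigger at memory.available < 100Mi. Kubelet's documented formula for allocatable memory is capacity minus kube-reserved minus system-reserved minus the hard eviction threshold; a worked sketch with this node's numbers (capacity taken from the machine info above):

```go
package main

import "fmt"

const (
	mi = 1024 * 1024

	capacityBytes  = 33654132736 // MemoryCapacity from the machine info above
	kubeReserved   = 0           // KubeReserved is null in this nodeConfig
	systemReserved = 350 * mi    // SystemReserved memory from this nodeConfig
	hardEviction   = 100 * mi    // memory.available hard eviction threshold
)

func main() {
	// allocatable = capacity - kube-reserved - system-reserved - hard-eviction
	allocatable := int64(capacityBytes - kubeReserved - systemReserved - hardEviction)
	fmt.Printf("allocatable memory: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/(1<<30))
}
```

That works out to roughly 30.9 GiB schedulable for pods; the nodefs/imagefs thresholds are percentage-based (10%, 15%, 5%) and scale with the filesystems listed earlier, so they carry no fixed byte value here.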
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.853260 4990 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.853278 4990 kubelet.go:324] "Adding apiserver pod source"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.853294 4990 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.856231 4990 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.856692 4990 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.857734 4990 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.858063 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.858058 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858449 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858482 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858507 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858516 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858534 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858546 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858556 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858572 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858584 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858597 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858611 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.858622 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.858580 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.858601 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.859053 4990 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.859660 4990 server.go:1280] "Started kubelet"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.860329 4990 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.860925 4990 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.861617 4990 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 05 01:08:21 crc systemd[1]: Started Kubernetes Kubelet.
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.862848 4990 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.864147 4990 server.go:460] "Adding debug handlers to kubelet server"
Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.863787 4990 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e2c61a487cd60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 01:08:21.859626336 +0000 UTC m=+0.235841707,LastTimestamp:2025-12-05 01:08:21.859626336 +0000 UTC m=+0.235841707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.864887 4990 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.865000 4990 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.865056 4990 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:04:06.533657886 +0000 UTC
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.865273 4990 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.865307 4990 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.865317 4990
desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.865229 4990 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.866872 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.867020 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.867500 4990 factory.go:55] Registering systemd factory
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.867584 4990 factory.go:221] Registration of the systemd container factory successfully
Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.867570 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms"
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.868305 4990 factory.go:153] Registering CRI-O factory
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.868374 4990 factory.go:221] Registration of the crio container factory successfully
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.868478 4990 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.868566 4990 factory.go:103] Registering Raw factory
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.868638 4990 manager.go:1196] Started watching for new ooms in manager
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.877159 4990 manager.go:319] Starting recovery of all containers
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.885877 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.885956 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.885978 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886002 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the
actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886022 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886039 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886058 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886078 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886100 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886118 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886136 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886154 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886171 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886224 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886242 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886258 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886279 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886297 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886316 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886398 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886418 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886436 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886454 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886472 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886519 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886540 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886561 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886583 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886604 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886622 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886649 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886667 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886688 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886707 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886727 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886747 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886765 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886785 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886805 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886824 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886847 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886867 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886887 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886906 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886925 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886944 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886964 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.886985 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887006 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887026 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887045 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887065 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887090 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887111 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887132 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887152 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887178 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887198 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887217 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887236 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887256 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887275 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887293 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887313 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887334 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887387 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887412 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887434 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887451 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887470 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887515 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887536 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887554 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887575 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887594 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887613 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887639 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887665 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887684 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887702 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887727 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887747 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887766 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887783 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887801 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887825 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887843 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887862 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887917 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887938 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887960 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.887989 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888012 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888044 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888078 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888105 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888129 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888151 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888170 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888190 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888208 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888228 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888247 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888265 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888295 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888317 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888338 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888359 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888381 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888402 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888422 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888444 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888467 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888518 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888540 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888559 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888580 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888601 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888621 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888642 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888661 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888681 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888701 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888721 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888741 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888759 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888777 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888799 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888818 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888836 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888857 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.888936 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889026 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889050 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889073 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889092 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889122 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889141 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889167 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.889189 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.890568 4990 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.890717 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.890757 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.890832 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.890899 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.890924 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891047 4990 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891109 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891128 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891186 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891209 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891230 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891284 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891304 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891422 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891475 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891533 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891555 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891649 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891677 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891727 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891746 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891764 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.891827 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.892835 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.892893 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.892921 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.892950 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.892974 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.892997 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893019 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893041 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893065 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893092 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893130 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893160 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893187 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893210 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893235 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893260 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893283 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893307 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893332 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893355 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893377 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893398 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893422 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893446 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893469 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893524 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893545 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893570 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893593 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893616 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893682 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893709 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893735 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893755 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893777 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893797 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893821 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893845 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893867 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893887 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893908 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893928 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893950 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893972 4990 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.893990 4990 reconstruct.go:97] "Volume reconstruction finished" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.894003 4990 reconciler.go:26] "Reconciler: start to sync state" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.901629 4990 manager.go:324] Recovery completed Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.911993 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.913977 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.914034 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.914046 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.916190 4990 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.916215 4990 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.916238 4990 state_mem.go:36] "Initialized new in-memory state store" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.925935 4990 
Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.925935 4990 policy_none.go:49] "None policy: Start" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.926808 4990 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.927390 4990 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.927422 4990 state_mem.go:35] "Initializing new in-memory state store" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.929051 4990 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.929094 4990 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.929122 4990 kubelet.go:2335] "Starting kubelet main sync loop" Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.929173 4990 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 05 01:08:21 crc kubenswrapper[4990]: W1205 01:08:21.929728 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.929767 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.965424 4990 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.975293 4990 manager.go:334] "Starting Device Plugin manager" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.975442 4990 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.975458 4990 server.go:79] "Starting device plugin registration server" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.975939 4990 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.975955 4990 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.976334 4990 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.976454 4990 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 05 01:08:21 crc kubenswrapper[4990]: I1205 01:08:21.976466 4990 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 05 01:08:21 crc kubenswrapper[4990]: E1205 01:08:21.988189 4990 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
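
By this point the kubelet's main sync loop is started but idle ("Skipping pod synchronization": the runtime status check has not completed and PLEG has yet to report healthy), and every client-go reflector call against https://api-int.crc.testing:6443 fails with connection refused. That is expected during bootstrap: the API server is itself one of the file-sourced static pods the kubelet is about to start, so nothing is listening on 6443 yet. The failing dial is easy to reproduce out of band; a minimal sketch, where the 2s timeout and 1s poll interval are arbitrary choices for illustration, not kubelet behavior:

```go
// apiserver_probe.go — reproduce the failing dial from the reflector errors.
// The endpoint is taken verbatim from the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for {
		conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
		if err != nil {
			// While kube-apiserver-crc is still starting this prints the same
			// "dial tcp 38.102.83.145:6443: connect: connection refused".
			fmt.Println("not reachable yet:", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("kube-apiserver TCP endpoint is up")
		return
	}
}
```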
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.029709 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.030681 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.030733 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.030747 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.030935 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.031075 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.031134 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.031908 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.031946 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.031958 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.032076 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.032088 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.032123 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.032142 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.032248 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.032290 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033078 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033115 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033131 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033181 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033203 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033221 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033334 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033376 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.033436 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034409 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034449 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034463 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034469 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034525 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034541 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034598 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034875 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.034926 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.035354 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.035387 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.035398 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.035550 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.035579 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.036001 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.036046 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.036065 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.036520 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.036560 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.036592 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: E1205 01:08:22.068717 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.076174 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.079350 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.079400 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.079631 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.079675 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:22 crc kubenswrapper[4990]: E1205 01:08:22.081176 4990 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: 
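
Both the lease controller (controller.go:145, interval="400ms") and node registration (kubelet_node_status.go:76/99) hit the same refused endpoint and keep retrying until kube-apiserver-crc answers. The loop below is only a schematic of that retry-with-growing-interval pattern; the 400ms seed, the cap, the bounded attempt count, and the register stub are all illustrative rather than kubelet source:

```go
// register_retry.go — a schematic doubling-and-capped retry in the spirit of
// "Failed to ensure lease exists, will retry" with interval="400ms".
package main

import (
	"errors"
	"fmt"
	"time"
)

// register stands in for POST https://api-int.crc.testing:6443/api/v1/nodes.
func register() error {
	return errors.New("connect: connection refused")
}

func main() {
	interval := 400 * time.Millisecond
	const maxInterval = 7 * time.Second // illustrative cap, not kubelet's
	for attempt := 1; attempt <= 5; attempt++ { // bounded so the sketch terminates
		if err := register(); err != nil {
			fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("node registered")
		return
	}
}
```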
connection refused" node="crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096024 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096192 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096290 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096397 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096557 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096697 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096817 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.096944 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097039 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097138 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097227 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097312 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097421 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097538 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.097864 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198697 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198738 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198762 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198777 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198795 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198811 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198825 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198839 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198855 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198869 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198886 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198902 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198916 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198929 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.198943 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199606 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199646 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199682 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199704 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199720 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199722 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199750 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199754 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199785 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199762 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199802 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199805 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199706 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199867 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.199939 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.281620 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.283283 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.283338 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.283352 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.283378 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:22 crc kubenswrapper[4990]: E1205 01:08:22.284032 4990 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.356799 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.364190 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.381963 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: W1205 01:08:22.387559 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9eb040e35efd812afed27ca03d944ea037d445a39571106a0cd3d14c129f28a4 WatchSource:0}: Error finding container 9eb040e35efd812afed27ca03d944ea037d445a39571106a0cd3d14c129f28a4: Status 404 returned error can't find the container with id 9eb040e35efd812afed27ca03d944ea037d445a39571106a0cd3d14c129f28a4 Dec 05 01:08:22 crc kubenswrapper[4990]: W1205 01:08:22.392206 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-028dee3a8d8243066c692d3af184db622f7a622b5beb7f3f59c56502dc4c23e6 WatchSource:0}: Error finding container 028dee3a8d8243066c692d3af184db622f7a622b5beb7f3f59c56502dc4c23e6: Status 404 returned error can't find the container with id 028dee3a8d8243066c692d3af184db622f7a622b5beb7f3f59c56502dc4c23e6 Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.398198 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: W1205 01:08:22.403218 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a881c16f91f6b0af46798c2f6bd19e6ca10c97cba01d9043e93f74ab33ef0cdb WatchSource:0}: Error finding container a881c16f91f6b0af46798c2f6bd19e6ca10c97cba01d9043e93f74ab33ef0cdb: Status 404 returned error can't find the container with id a881c16f91f6b0af46798c2f6bd19e6ca10c97cba01d9043e93f74ab33ef0cdb Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.403350 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:22 crc kubenswrapper[4990]: W1205 01:08:22.421795 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c147435e662c704a30a449771e6cbbd1055b919084097ae89f4784ae89fcfdf6 WatchSource:0}: Error finding container c147435e662c704a30a449771e6cbbd1055b919084097ae89f4784ae89fcfdf6: Status 404 returned error can't find the container with id c147435e662c704a30a449771e6cbbd1055b919084097ae89f4784ae89fcfdf6 Dec 05 01:08:22 crc kubenswrapper[4990]: W1205 01:08:22.422573 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-708b80092655f04a37a83b13b5ca8df27a474c0a9e85ff01a84eb9f403aa64db WatchSource:0}: Error finding container 708b80092655f04a37a83b13b5ca8df27a474c0a9e85ff01a84eb9f403aa64db: Status 404 returned error can't find the container with id 708b80092655f04a37a83b13b5ca8df27a474c0a9e85ff01a84eb9f403aa64db Dec 05 01:08:22 crc kubenswrapper[4990]: E1205 01:08:22.470376 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.684780 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.686833 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.686891 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.686912 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.686949 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:22 crc kubenswrapper[4990]: E1205 01:08:22.687494 4990 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 05 01:08:22 crc kubenswrapper[4990]: W1205 01:08:22.838995 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 05 01:08:22 crc kubenswrapper[4990]: E1205 01:08:22.839099 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.861385 4990 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.145:6443: connect: connection refused Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.865549 4990 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:39:00.155096468 +0000 UTC Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.865617 4990 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 750h30m37.289483769s for next certificate rotation Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.934137 4990 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="768ff72b1ea934bc74cf79e63af47c25934013a54a255336bfcc59308cb7637f" exitCode=0 Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.934246 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"768ff72b1ea934bc74cf79e63af47c25934013a54a255336bfcc59308cb7637f"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.934425 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9eb040e35efd812afed27ca03d944ea037d445a39571106a0cd3d14c129f28a4"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.934590 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.935949 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.935992 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.936002 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.937124 4990 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c" exitCode=0 Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.937208 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.937275 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"708b80092655f04a37a83b13b5ca8df27a474c0a9e85ff01a84eb9f403aa64db"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.937453 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.938537 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.938570 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.938583 4990 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.940217 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.940284 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c147435e662c704a30a449771e6cbbd1055b919084097ae89f4784ae89fcfdf6"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.943089 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d" exitCode=0 Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.943172 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.943223 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a881c16f91f6b0af46798c2f6bd19e6ca10c97cba01d9043e93f74ab33ef0cdb"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.943379 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.944253 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.944288 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.944306 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.945134 4990 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="56971ea5a2f9658447c676b7f041c4b73a2da23aa7e77476c085678847b3f5bf" exitCode=0 Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.945161 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"56971ea5a2f9658447c676b7f041c4b73a2da23aa7e77476c085678847b3f5bf"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.945180 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"028dee3a8d8243066c692d3af184db622f7a622b5beb7f3f59c56502dc4c23e6"} Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.945265 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.946133 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: 
I1205 01:08:22.946188 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.946201 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.946955 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.947759 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.947809 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:22 crc kubenswrapper[4990]: I1205 01:08:22.947829 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:23 crc kubenswrapper[4990]: W1205 01:08:23.016779 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 05 01:08:23 crc kubenswrapper[4990]: E1205 01:08:23.016875 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 05 01:08:23 crc kubenswrapper[4990]: W1205 01:08:23.019619 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 05 01:08:23 crc kubenswrapper[4990]: E1205 01:08:23.019664 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 05 01:08:23 crc kubenswrapper[4990]: W1205 01:08:23.110062 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 05 01:08:23 crc kubenswrapper[4990]: E1205 01:08:23.110141 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 05 01:08:23 crc kubenswrapper[4990]: E1205 01:08:23.271185 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s" Dec 05 01:08:23 crc 
kubenswrapper[4990]: I1205 01:08:23.488126 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.493177 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.493224 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.493239 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.493272 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:23 crc kubenswrapper[4990]: E1205 01:08:23.493947 4990 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.949763 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.949823 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.949845 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.949983 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.950798 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.950827 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.950838 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.953122 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.953190 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.953205 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.953218 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.954431 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.954464 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.954475 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.957405 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.957446 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.957462 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.957474 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.957508 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.957633 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.958406 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.958453 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.958467 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.959180 4990 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b748b8e14910997e3269afafbb78dd6712f371920206b7043788c7b14d414db0" exitCode=0 Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.959260 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b748b8e14910997e3269afafbb78dd6712f371920206b7043788c7b14d414db0"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.959384 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.960143 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.960181 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.960192 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.961209 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"13bf0c2fdd19969720f93ba0e18489521ef2a13408e2b9cb3207a43d0258dada"} Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.961318 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.962187 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.962216 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:23 crc kubenswrapper[4990]: I1205 01:08:23.962228 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.263752 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.377398 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969057 4990 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f09aa23f0a895c98e67bae431961eb767a5e420a88631982e1dc22706d42e70d" exitCode=0 Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969272 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969351 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969251 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f09aa23f0a895c98e67bae431961eb767a5e420a88631982e1dc22706d42e70d"} Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969409 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969517 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.969581 4990 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971313 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971380 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971391 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971340 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971423 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971436 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971445 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971451 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971409 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971560 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971532 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:24 crc kubenswrapper[4990]: I1205 01:08:24.971635 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.094786 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.096891 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.096947 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.096967 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.097002 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.874850 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.979423 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4db69925b4a066352758f7e6228c4ca6a98dbb1a06db398b80f4e74ea290c3a9"} Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.979494 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"129b3f19fbe6d1e61ac074571b0b829f209d9f3623d6d6e6b726b3a4c3d5b7ce"} Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.979510 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3e92476a5fa3f56266d159c0b8480dc5efe8c4cb697c85e5c2b6a6d607f802b"} Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.979529 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.979572 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.979590 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.980793 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.980824 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.980834 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.980922 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.980964 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:25 crc kubenswrapper[4990]: I1205 01:08:25.980989 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.529536 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.609091 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.830106 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.989413 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd612abccb914ef31907e5f71d596a6e9db843122e382c60d48bd4f03f9f7163"} Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.989500 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.989514 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"90817dfff4766f7c276362fb9a8b760d33fe0aa1f8ad94419c336199b866a834"} Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.989588 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.989623 4990 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.990738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.990765 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.990775 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.998152 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.998185 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.998211 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.998225 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.998215 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:26 crc kubenswrapper[4990]: I1205 01:08:26.998249 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.054280 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.060422 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.992442 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.992543 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.992702 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.993762 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.993805 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.993817 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.993879 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.993925 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.993936 4990 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.995122 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.995165 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:27 crc kubenswrapper[4990]: I1205 01:08:27.995182 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:28 crc kubenswrapper[4990]: I1205 01:08:28.996214 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:28 crc kubenswrapper[4990]: I1205 01:08:28.997843 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:28 crc kubenswrapper[4990]: I1205 01:08:28.997916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:28 crc kubenswrapper[4990]: I1205 01:08:28.997944 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:29 crc kubenswrapper[4990]: I1205 01:08:29.529961 4990 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 01:08:29 crc kubenswrapper[4990]: I1205 01:08:29.530052 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 01:08:30 crc kubenswrapper[4990]: I1205 01:08:30.741467 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 05 01:08:30 crc kubenswrapper[4990]: I1205 01:08:30.741782 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:30 crc kubenswrapper[4990]: I1205 01:08:30.743740 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:30 crc kubenswrapper[4990]: I1205 01:08:30.743799 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:30 crc kubenswrapper[4990]: I1205 01:08:30.743819 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:31 crc kubenswrapper[4990]: E1205 01:08:31.988370 4990 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 01:08:32 crc kubenswrapper[4990]: I1205 01:08:32.973961 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:32 crc kubenswrapper[4990]: I1205 01:08:32.974307 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:32 crc kubenswrapper[4990]: I1205 01:08:32.976526 4990 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:32 crc kubenswrapper[4990]: I1205 01:08:32.976610 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:32 crc kubenswrapper[4990]: I1205 01:08:32.976639 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:32 crc kubenswrapper[4990]: I1205 01:08:32.983774 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:33 crc kubenswrapper[4990]: I1205 01:08:33.007041 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:33 crc kubenswrapper[4990]: I1205 01:08:33.008935 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:33 crc kubenswrapper[4990]: I1205 01:08:33.009018 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:33 crc kubenswrapper[4990]: I1205 01:08:33.009030 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:33 crc kubenswrapper[4990]: I1205 01:08:33.862355 4990 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 05 01:08:34 crc kubenswrapper[4990]: E1205 01:08:34.873050 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 05 01:08:35 crc kubenswrapper[4990]: E1205 01:08:35.098393 4990 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 05 01:08:35 crc kubenswrapper[4990]: W1205 01:08:35.261316 4990 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 05 01:08:35 crc kubenswrapper[4990]: I1205 01:08:35.261510 4990 trace.go:236] Trace[622816164]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 01:08:25.258) (total time: 10002ms): Dec 05 01:08:35 crc kubenswrapper[4990]: Trace[622816164]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (01:08:35.261) Dec 05 01:08:35 crc kubenswrapper[4990]: Trace[622816164]: [10.002588364s] [10.002588364s] END Dec 05 01:08:35 crc kubenswrapper[4990]: E1205 01:08:35.261549 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 05 01:08:35 crc kubenswrapper[4990]: W1205 01:08:35.297916 4990 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 05 01:08:35 crc kubenswrapper[4990]: I1205 01:08:35.298085 4990 trace.go:236] Trace[707981744]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 01:08:25.296) (total time: 10001ms): Dec 05 01:08:35 crc kubenswrapper[4990]: Trace[707981744]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:08:35.297) Dec 05 01:08:35 crc kubenswrapper[4990]: Trace[707981744]: [10.001785572s] [10.001785572s] END Dec 05 01:08:35 crc kubenswrapper[4990]: E1205 01:08:35.298315 4990 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 05 01:08:35 crc kubenswrapper[4990]: I1205 01:08:35.492326 4990 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 05 01:08:35 crc kubenswrapper[4990]: I1205 01:08:35.492433 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 05 01:08:35 crc kubenswrapper[4990]: I1205 01:08:35.501754 4990 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 05 01:08:35 crc kubenswrapper[4990]: I1205 01:08:35.501845 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.073325 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.073568 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.074831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.074883 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.074896 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 
01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.126184 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.637758 4990 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]log ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]etcd ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/generic-apiserver-start-informers ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/priority-and-fairness-filter ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-apiextensions-informers ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-apiextensions-controllers ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/crd-informer-synced ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-system-namespaces-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 05 01:08:36 crc kubenswrapper[4990]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/bootstrap-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/start-kube-aggregator-informers ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-registration-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-discovery-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: 
[+]poststarthook/kube-apiserver-autoregistration ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]autoregister-completion ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-openapi-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 05 01:08:36 crc kubenswrapper[4990]: livez check failed Dec 05 01:08:36 crc kubenswrapper[4990]: I1205 01:08:36.637849 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 01:08:37 crc kubenswrapper[4990]: I1205 01:08:37.017916 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:37 crc kubenswrapper[4990]: I1205 01:08:37.018791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:37 crc kubenswrapper[4990]: I1205 01:08:37.018843 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:37 crc kubenswrapper[4990]: I1205 01:08:37.018857 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:37 crc kubenswrapper[4990]: I1205 01:08:37.029620 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.020659 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.022326 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.022374 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.022391 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.298544 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.299948 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.300004 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.300018 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:38 crc kubenswrapper[4990]: I1205 01:08:38.300046 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:38 crc kubenswrapper[4990]: E1205 01:08:38.305014 4990 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 05 01:08:39 crc kubenswrapper[4990]: I1205 01:08:39.530051 4990 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 01:08:39 crc kubenswrapper[4990]: I1205 01:08:39.530158 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.390616 4990 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.495946 4990 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.505248 4990 trace.go:236] Trace[1273025369]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 01:08:25.642) (total time: 14863ms): Dec 05 01:08:40 crc kubenswrapper[4990]: Trace[1273025369]: ---"Objects listed" error: 14862ms (01:08:40.505) Dec 05 01:08:40 crc kubenswrapper[4990]: Trace[1273025369]: [14.863033348s] [14.863033348s] END Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.505291 4990 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.507174 4990 trace.go:236] Trace[1234843101]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 01:08:26.010) (total time: 14496ms): Dec 05 01:08:40 crc kubenswrapper[4990]: Trace[1234843101]: ---"Objects listed" error: 14496ms (01:08:40.506) Dec 05 01:08:40 crc kubenswrapper[4990]: Trace[1234843101]: [14.496822247s] [14.496822247s] END Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.507394 4990 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.553090 4990 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40958->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.553178 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40958->192.168.126.11:17697: read: connection reset by peer" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.553869 4990 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40968->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.554023 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40968->192.168.126.11:17697: read: connection reset by peer" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.863053 4990 apiserver.go:52] "Watching apiserver" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.866113 4990 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.866520 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.867119 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.867125 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.867190 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.867238 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:40 crc kubenswrapper[4990]: E1205 01:08:40.867371 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.867443 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.867758 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:40 crc kubenswrapper[4990]: E1205 01:08:40.867821 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:40 crc kubenswrapper[4990]: E1205 01:08:40.867811 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.869063 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.869525 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.870801 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.870860 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.870862 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.871010 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.871093 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.871178 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.871290 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.895126 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.913101 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.931755 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.942752 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.957348 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.966531 4990 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.969753 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.981689 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998547 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998606 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998638 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998669 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998704 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998730 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998754 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998780 4990 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998805 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998830 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998853 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998878 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998903 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998929 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.998982 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999010 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999034 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 01:08:40 crc kubenswrapper[4990]: 
I1205 01:08:40.999064 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999090 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999112 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999136 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999160 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999184 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999208 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999231 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999254 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999280 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID:
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999301 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 01:08:40 crc kubenswrapper[4990]: I1205 01:08:40.999324 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999349 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999371 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999395 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999422 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999450 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999495 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999521 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999549 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: 
\"44663579-783b-4372-86d6-acf235a62d72\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999573 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999596 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999619 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999654 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999676 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999701 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999724 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999746 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999767 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999790 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999820 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999846 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999872 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999901 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999930 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999956 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999982 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000012 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000045 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000070 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000097 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000127 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000154 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000180 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000207 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000233 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000292 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000320 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000348 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000377 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000403 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000436 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000459 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000499 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999201 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000527 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999222 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000554 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999402 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999907 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000579 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999912 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:40.999970 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000048 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000161 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000413 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000648 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000467 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000498 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000680 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000908 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000988 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001003 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001055 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001117 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
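The status_manager entry above is a different failure mode from the unmount traffic around it: the kubelet's pod-status patch was rejected because the API server could not reach the pod.network-node-identity.openshift.io webhook backend on 127.0.0.1:9743 (connection refused), consistent with the node still bringing its networking components back up after the kubelet restart. A minimal Go sketch of the same reachability probe, using only the address visible in the log line; the probe is illustrative, not kubelet or OpenShift code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Address taken from the failed webhook call in the log:
    	// Post "https://127.0.0.1:9743/pod?timeout=10s" -> connection refused.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    	if err != nil {
    		fmt.Println("webhook backend unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("webhook backend is accepting connections")
    }

Once the network-node-identity pods are listening on that port, the same probe connects and the kubelet's next status sync succeeds on retry.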
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001142 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001361 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001627 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001922 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.001932 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002156 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002324 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002367 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002394 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002541 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002570 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002689 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.000604 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002748 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002779 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002808 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002834 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002860 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.002975 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003010 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003030 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003051 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003071 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003088 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003107 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003126 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003145 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003141 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003168 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003236 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003262 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003296 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003335 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003369 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003415 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003455 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003521 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003556 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003591 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003627 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003665 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003702 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003744 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003778 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003816 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003853 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003885 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003920 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003953 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003988 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004024 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004057 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004092 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004131 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004164 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004197 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004232 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004291 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004348 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004392 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004435 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004504 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004541 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004577 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004618 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004653 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004689 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004725 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003368 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018587 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003569 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003758 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003763 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.003976 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004311 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004434 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004469 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.004783 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:08:41.504729769 +0000 UTC m=+19.880945350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018769 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018828 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018867 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018900 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018932 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018964 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018992 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019022 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019051 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019079 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019109 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019141 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019171 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019201 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019228 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019256 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019285 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " 
Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019314 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019344 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019375 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019402 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019429 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019454 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019493 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019524 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019556 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019584 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019614 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019644 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019676 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019701 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019726 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019752 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019795 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019826 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019852 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019875 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019906 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019933 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019967 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019992 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020018 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020048 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020078 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020103 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020127 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020150 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019159 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020853 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019215 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004925 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004943 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.005037 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.005332 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.005608 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.006472 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.006596 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.006817 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.006925 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.006997 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007165 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007213 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007429 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007432 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007498 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007766 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.007806 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.008450 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021165 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.014974 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.015125 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.015428 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.015460 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.015512 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.015789 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.015938 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.016012 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.016053 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.016815 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.016968 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017217 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017349 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017383 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017429 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017545 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017661 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017741 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017765 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017876 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.017900 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018049 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018099 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.018144 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019349 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.019564 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.004901 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020210 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020690 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021552 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021819 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021837 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020823 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021928 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020845 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021261 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021366 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.021130 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020982 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022082 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022121 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022205 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022261 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022493 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022617 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022768 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022818 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022817 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.022967 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.023191 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.023687 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024034 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.020685 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024108 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024144 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024079 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024567 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024599 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024710 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.024823 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025107 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025231 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025277 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025232 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025826 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025835 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.025843 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.026466 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.026688 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.026926 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.026995 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.027491 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.027892 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028017 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028170 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028223 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028252 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028281 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028454 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028503 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028532 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028561 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028594 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028619 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028648 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028676 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028705 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028729 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028756 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028784 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028813 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028838 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028862 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028886 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028985 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029022 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029079 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029105 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029359 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029417 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029469 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029518 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029550 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029583 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029618 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029653 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029679 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029859 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029883 4990 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029901 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029916 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029931 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029946 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029962 4990 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029976 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029990 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030007 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030022 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030037 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030056 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030071 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030085 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030100 4990 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030114 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030128 4990 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030142 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030156 4990 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030169 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030181 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030196 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030209 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030224 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030238 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030252 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030266 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030279 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030294 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030308 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030325 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030338 4990 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030350 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030362 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030375 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030388 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030400 4990 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030413 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030425 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030440 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030454 4990 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030468 4990 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030498 4990 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030513 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030527 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030542 4990 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030556 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030569 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030582 4990 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030594 4990 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030606 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030618 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030631 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030645 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030659 4990 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030673 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030685 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030698 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030710 4990 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030724 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030738 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030751 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030766 4990 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030779 4990 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030792 4990 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030805 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030819 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030831 4990 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030844 4990 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030858 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030871 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030885 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030899 4990 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030912 4990 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030925 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030940 4990 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030954 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030967 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030979 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.030989 4990 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031000 4990 reconciler_common.go:293] "Volume detached for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031011 4990 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031022 4990 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031032 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031043 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031052 4990 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031062 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031072 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031081 4990 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031091 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031100 4990 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031109 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031118 4990 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031127 4990 reconciler_common.go:293] "Volume detached for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031138 4990 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031148 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031156 4990 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031166 4990 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031174 4990 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031183 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031193 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031202 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031212 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031221 4990 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031231 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031239 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031248 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031258 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031267 4990 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031276 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031285 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031294 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031302 4990 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031311 4990 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028136 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028214 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028745 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028804 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028894 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.028980 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029068 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029145 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.029311 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031638 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031699 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031720 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031913 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031920 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031945 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.031995 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.032063 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.032260 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.032326 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.032427 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.032610 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.032657 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.032929 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.033196 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.033329 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.033344 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.033362 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.033708 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.034063 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.034295 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.035171 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.034390 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.033852 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.035084 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.035537 4990 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.035668 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.035673 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.036040 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.036652 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.037040 4990 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.037202 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:41.537166198 +0000 UTC m=+19.913381579 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.037732 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.037377 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.037583 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038019 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038038 4990 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038055 4990 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038072 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038089 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038109 4990 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038127 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc 
kubenswrapper[4990]: I1205 01:08:41.038143 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038158 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038176 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038195 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038215 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038262 4990 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038282 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.038699 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.038821 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:41.538797726 +0000 UTC m=+19.915013087 (durationBeforeRetry 500ms). 
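The secret.go:188 and configmap.go:193 failures in this stretch ("object ... not registered") do not mean the objects are missing from the API server: the kubelet's secret and configmap managers only serve objects referenced by pods that have been (re)registered with them, and immediately after a kubelet restart volume setup can run before that registration catches up, so the mount fails fast and falls into the 500ms retry embargo shown above. A minimal sketch of that gate, with illustrative names and shape:

```go
package main

import "fmt"

// objectCache sketches the registration gate implied by the errors above:
// lookups succeed only for objects some registered pod references.
type objectCache struct {
	registered map[string]bool // "namespace/name" -> known to the manager
}

func (c *objectCache) getConfigMap(namespace, name string) error {
	if !c.registered[namespace+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return nil
}

func main() {
	c := &objectCache{registered: map[string]bool{}}
	// Right after a restart the pod has not re-registered its references yet,
	// so the lookup fails and the mount is queued for retry.
	fmt.Println(c.getConfigMap("openshift-network-console", "networking-console-plugin"))
}
```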
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.041412 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.039027 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.039144 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043424 4990 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.039242 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.039368 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.039584 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.039903 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043523 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043550 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043572 4990 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043594 4990 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043612 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043677 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.043705 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.044296 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.048507 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.052041 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.052653 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.052811 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.053160 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.053269 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.053342 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.053393 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.057723 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106" exitCode=255 Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.058023 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106"} Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.058117 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.058378 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.059558 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.059669 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.059758 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.059918 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:41.559899913 +0000 UTC m=+19.936115274 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.060658 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.060717 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.060737 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.060864 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:41.560810069 +0000 UTC m=+19.937025600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.063174 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.070097 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.082386 4990 scope.go:117] "RemoveContainer" containerID="97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.082639 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.083233 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.094712 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.097751 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.111833 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.114003 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.121838 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.135584 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.144945 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.144994 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145059 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145075 4990 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145091 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145104 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145116 4990 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145130 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145143 4990 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145154 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145167 4990 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145177 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145189 4990 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145201 4990 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145214 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145227 4990 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145238 4990 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145250 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145263 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145274 4990 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145285 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145297 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145308 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145318 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145331 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145346 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145359 4990 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145371 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145384 4990 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145398 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145412 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145426 4990 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145439 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145452 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145466 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145503 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145519 4990 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145532 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145544 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145557 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145569 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145581 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145604 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145617 4990 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145629 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145642 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145654 4990 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145667 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145680 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145692 4990 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145704 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145716 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145728 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145739 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145754 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145766 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145778 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145790 4990 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145801 4990 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145812 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.145984 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.146044 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.153468 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.163769 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.179248 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.179372 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.192082 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.197252 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.200342 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: W1205 01:08:41.207769 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-54b97f80c96abd24dcd1403ae1c06db2ddb71afb30f86adc4840045008da8cd8 WatchSource:0}: Error finding container 54b97f80c96abd24dcd1403ae1c06db2ddb71afb30f86adc4840045008da8cd8: Status 404 returned error can't find the container with id 54b97f80c96abd24dcd1403ae1c06db2ddb71afb30f86adc4840045008da8cd8 Dec 05 01:08:41 crc kubenswrapper[4990]: W1205 01:08:41.218895 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1a31d166149e320d7ed96f3f49c7ecb29be4af329672d5dc3e575dd186e28055 WatchSource:0}: Error finding container 1a31d166149e320d7ed96f3f49c7ecb29be4af329672d5dc3e575dd186e28055: Status 404 returned error can't find the container with id 1a31d166149e320d7ed96f3f49c7ecb29be4af329672d5dc3e575dd186e28055 Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.483975 4990 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.548905 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.549028 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.549115 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.549127 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:08:42.54909354 +0000 UTC m=+20.925308901 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.549191 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:42.549172563 +0000 UTC m=+20.925387974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.549216 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.549304 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.549342 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:42.549334547 +0000 UTC m=+20.925550008 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.616602 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.650266 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.650321 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650472 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650515 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650530 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650525 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650568 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650582 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650596 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:42.650577629 +0000 UTC m=+21.026792990 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.650646 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:42.65062717 +0000 UTC m=+21.026842531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.655024 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05
T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.669048 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.680769 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.695510 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.719570 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.733588 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.772455 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wb424"] Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.772778 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.781784 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.783558 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.787204 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.805885 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.816359 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.873544 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.902190 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.922210 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.930256 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:41 crc kubenswrapper[4990]: E1205 01:08:41.930440 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.934909 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.935627 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.936695 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.937578 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.939880 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.940752 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.941408 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.946930 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.947750 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.948610 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.949258 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.950097 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.950710 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.951372 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.952020 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.952416 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7kcz\" (UniqueName: \"kubernetes.io/projected/0f072df2-6ddf-4707-8852-a60655293cc8-kube-api-access-k7kcz\") pod \"node-resolver-wb424\" (UID: \"0f072df2-6ddf-4707-8852-a60655293cc8\") " pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.952501 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0f072df2-6ddf-4707-8852-a60655293cc8-hosts-file\") pod \"node-resolver-wb424\" (UID: \"0f072df2-6ddf-4707-8852-a60655293cc8\") " pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.952706 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.955973 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.956473 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.957353 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.958564 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.959062 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.959687 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.960637 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.961509 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.961823 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.962719 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.963526 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.964868 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.965510 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.966699 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.967298 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.967961 4990 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.968099 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.975891 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.976537 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.978058 
4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.979893 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.980772 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.981074 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.982228 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.983125 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.986932 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.987654 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.988894 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.989789 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.990959 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.991535 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.992678 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.993338 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.996095 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 05 01:08:41 crc kubenswrapper[4990]: I1205 01:08:41.996932 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.004063 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.004365 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.004681 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.005605 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.007069 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.007716 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.028420 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05
T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.054155 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7kcz\" (UniqueName: \"kubernetes.io/projected/0f072df2-6ddf-4707-8852-a60655293cc8-kube-api-access-k7kcz\") pod \"node-resolver-wb424\" (UID: \"0f072df2-6ddf-4707-8852-a60655293cc8\") " pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.054131 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.054537 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0f072df2-6ddf-4707-8852-a60655293cc8-hosts-file\") pod \"node-resolver-wb424\" (UID: \"0f072df2-6ddf-4707-8852-a60655293cc8\") " pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.054920 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0f072df2-6ddf-4707-8852-a60655293cc8-hosts-file\") pod \"node-resolver-wb424\" (UID: \"0f072df2-6ddf-4707-8852-a60655293cc8\") " pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.069059 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.069464 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.069581 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1a31d166149e320d7ed96f3f49c7ecb29be4af329672d5dc3e575dd186e28055"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.070760 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"54b97f80c96abd24dcd1403ae1c06db2ddb71afb30f86adc4840045008da8cd8"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.072859 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.072915 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bd995440413f677e65c4862df3acc60e1ceca5ce0ceaeddc13ea2e2368672748"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.074148 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.076315 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.079125 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217"} Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.079519 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.084683 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.094674 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7kcz\" (UniqueName: \"kubernetes.io/projected/0f072df2-6ddf-4707-8852-a60655293cc8-kube-api-access-k7kcz\") pod \"node-resolver-wb424\" (UID: \"0f072df2-6ddf-4707-8852-a60655293cc8\") " pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.103408 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.122711 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.145152 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.160530 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.177390 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.191471 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f6zb4"] Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.192455 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rdhk7"] Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.192728 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.192943 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.195331 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.195389 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.195582 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.195678 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.195941 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.197003 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-zxlh5"] Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.197987 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.198650 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.206621 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.206708 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.206766 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.207037 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.209904 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 
01:08:42.210099 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.210576 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.237022 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.256084 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.274711 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.286504 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.301805 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.316793 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.344617 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357056 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/644fbc14-61e3-4544-b42b-da32f942c0bc-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 
01:08:42.357111 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-cnibin\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357135 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c262\" (UniqueName: \"kubernetes.io/projected/b6580a04-67de-48f9-9da2-56cb4377af48-kube-api-access-6c262\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357163 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45jp\" (UniqueName: \"kubernetes.io/projected/c4914133-b0cd-4d12-84d5-c99379e2324a-kube-api-access-c45jp\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357189 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-netns\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357209 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-etc-kubernetes\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357231 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/644fbc14-61e3-4544-b42b-da32f942c0bc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357297 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-socket-dir-parent\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357350 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-system-cni-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357367 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4914133-b0cd-4d12-84d5-c99379e2324a-cni-binary-copy\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357389 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-cni-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357409 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-multus-certs\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357425 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6580a04-67de-48f9-9da2-56cb4377af48-rootfs\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357550 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-cni-multus\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357577 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-conf-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357595 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6580a04-67de-48f9-9da2-56cb4377af48-proxy-tls\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357625 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-os-release\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357647 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-system-cni-dir\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357670 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-cni-bin\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357689 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-hostroot\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357864 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htx8r\" (UniqueName: \"kubernetes.io/projected/644fbc14-61e3-4544-b42b-da32f942c0bc-kube-api-access-htx8r\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357943 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-daemon-config\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.357997 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-cnibin\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.358035 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-os-release\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.358069 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-k8s-cni-cncf-io\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.358101 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-kubelet\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.358178 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6580a04-67de-48f9-9da2-56cb4377af48-mcd-auth-proxy-config\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5"
Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.358227 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.360728 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.371338 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.382965 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.385123 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wb424" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.399407 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"ima
geID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459282 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6580a04-67de-48f9-9da2-56cb4377af48-rootfs\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459363 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-multus-certs\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459407 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-conf-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459431 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6580a04-67de-48f9-9da2-56cb4377af48-proxy-tls\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459455 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-os-release\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459505 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-cni-multus\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459512 4990 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6580a04-67de-48f9-9da2-56cb4377af48-rootfs\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459574 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-multus-certs\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459614 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-system-cni-dir\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459533 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-system-cni-dir\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459587 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-conf-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459700 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-cni-bin\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459728 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-hostroot\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459759 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htx8r\" (UniqueName: \"kubernetes.io/projected/644fbc14-61e3-4544-b42b-da32f942c0bc-kube-api-access-htx8r\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459784 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-daemon-config\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459606 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-cni-multus\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459824 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-hostroot\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459836 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-os-release\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459907 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-k8s-cni-cncf-io\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460459 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-kubelet\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460127 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-os-release\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460528 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-cnibin\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460540 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-kubelet\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460555 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6580a04-67de-48f9-9da2-56cb4377af48-mcd-auth-proxy-config\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460178 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-k8s-cni-cncf-io\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " 
pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460591 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-cnibin\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.459794 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-var-lib-cni-bin\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460186 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-os-release\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460646 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460728 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-cnibin\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460755 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c262\" (UniqueName: \"kubernetes.io/projected/b6580a04-67de-48f9-9da2-56cb4377af48-kube-api-access-6c262\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460783 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/644fbc14-61e3-4544-b42b-da32f942c0bc-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460803 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-netns\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460827 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-etc-kubernetes\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: 
I1205 01:08:42.460846 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45jp\" (UniqueName: \"kubernetes.io/projected/c4914133-b0cd-4d12-84d5-c99379e2324a-kube-api-access-c45jp\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460864 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/644fbc14-61e3-4544-b42b-da32f942c0bc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460885 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-socket-dir-parent\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460905 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-system-cni-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460925 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4914133-b0cd-4d12-84d5-c99379e2324a-cni-binary-copy\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460923 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-daemon-config\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.460943 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-cni-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461105 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-etc-kubernetes\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461174 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-cnibin\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461181 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-cni-dir\") pod 
\"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461306 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-multus-socket-dir-parent\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461387 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-system-cni-dir\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461432 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4914133-b0cd-4d12-84d5-c99379e2324a-host-run-netns\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.461535 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6580a04-67de-48f9-9da2-56cb4377af48-mcd-auth-proxy-config\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.462245 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4914133-b0cd-4d12-84d5-c99379e2324a-cni-binary-copy\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.462712 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/644fbc14-61e3-4544-b42b-da32f942c0bc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.465006 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6580a04-67de-48f9-9da2-56cb4377af48-proxy-tls\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.465646 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/644fbc14-61e3-4544-b42b-da32f942c0bc-cni-binary-copy\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.477281 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htx8r\" (UniqueName: \"kubernetes.io/projected/644fbc14-61e3-4544-b42b-da32f942c0bc-kube-api-access-htx8r\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " 
pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.480166 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c262\" (UniqueName: \"kubernetes.io/projected/b6580a04-67de-48f9-9da2-56cb4377af48-kube-api-access-6c262\") pod \"machine-config-daemon-zxlh5\" (UID: \"b6580a04-67de-48f9-9da2-56cb4377af48\") " pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.483883 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c45jp\" (UniqueName: \"kubernetes.io/projected/c4914133-b0cd-4d12-84d5-c99379e2324a-kube-api-access-c45jp\") pod \"multus-rdhk7\" (UID: \"c4914133-b0cd-4d12-84d5-c99379e2324a\") " pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.491367 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/644fbc14-61e3-4544-b42b-da32f942c0bc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f6zb4\" (UID: \"644fbc14-61e3-4544-b42b-da32f942c0bc\") " pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.514655 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.523259 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rdhk7" Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.529437 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod644fbc14_61e3_4544_b42b_da32f942c0bc.slice/crio-dff7bdcebb786d1c6731a34bf30d0106ce66f0731963a09aafc023ed0387fc7a WatchSource:0}: Error finding container dff7bdcebb786d1c6731a34bf30d0106ce66f0731963a09aafc023ed0387fc7a: Status 404 returned error can't find the container with id dff7bdcebb786d1c6731a34bf30d0106ce66f0731963a09aafc023ed0387fc7a Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.533641 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.562458 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.562609 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.562678 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.562771 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.562772 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.562742976 +0000 UTC m=+22.938958337 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.562852 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.562840739 +0000 UTC m=+22.939056100 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.563090 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.563235 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.56320738 +0000 UTC m=+22.939422741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.573511 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6580a04_67de_48f9_9da2_56cb4377af48.slice/crio-744f6da4cf5c7bd40963939c709c92c42043459bbd60e5be7418ca1d4c0a1bb1 WatchSource:0}: Error finding container 744f6da4cf5c7bd40963939c709c92c42043459bbd60e5be7418ca1d4c0a1bb1: Status 404 returned error can't find the container with id 744f6da4cf5c7bd40963939c709c92c42043459bbd60e5be7418ca1d4c0a1bb1 Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.592362 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4w6g9"] Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.593240 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598769 4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598790 4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598822 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598778 4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598840 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598849 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598849 4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598877 4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598912 
4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: W1205 01:08:42.598921 4990 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598936 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598895 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598951 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.598926 4990 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.623155 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.647853 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.663399 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 
01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.663466 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.663666 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.663691 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.663707 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.663769 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.66374696 +0000 UTC m=+23.039962321 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.664244 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.664270 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.664280 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.664311 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.664301046 +0000 UTC m=+23.040516407 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.674670 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.703587 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.721564 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.736634 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.748269 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.762174 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764044 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-etc-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764094 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-log-socket\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 
01:08:42.764117 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-bin\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764139 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-ovn\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764233 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8ffk\" (UniqueName: \"kubernetes.io/projected/3eeec70d-1c5c-434e-90bc-95620458151c-kube-api-access-s8ffk\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764297 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764322 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764381 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-netns\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764409 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764437 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-kubelet\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764463 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib\") pod \"ovnkube-node-4w6g9\" (UID: 
\"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764642 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-slash\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764696 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-var-lib-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764721 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-node-log\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764779 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764800 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-ovn-kubernetes\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764830 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-netd\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.764890 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-systemd\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.765012 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-systemd-units\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.765047 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3eeec70d-1c5c-434e-90bc-95620458151c-ovn-node-metrics-cert\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.777684 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.803733 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.818585 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.829776 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866381 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-systemd-units\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866452 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3eeec70d-1c5c-434e-90bc-95620458151c-ovn-node-metrics-cert\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866511 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-etc-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866535 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-log-socket\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866562 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-bin\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866586 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-ovn\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866608 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866609 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-systemd-units\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866640 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8ffk\" (UniqueName: \"kubernetes.io/projected/3eeec70d-1c5c-434e-90bc-95620458151c-kube-api-access-s8ffk\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866655 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-etc-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866714 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866707 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-bin\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866654 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-log-socket\") pod 
\"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866765 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866822 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-ovn\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866908 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-netns\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866869 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-netns\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866961 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.866990 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-kubelet\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867013 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867051 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-slash\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867072 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-var-lib-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867084 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-kubelet\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867109 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-node-log\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867142 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-slash\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867145 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867175 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-ovn-kubernetes\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867192 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-netd\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867196 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-var-lib-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867175 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-openvswitch\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867237 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-node-log\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867248 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-systemd\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867246 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-netd\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867262 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-ovn-kubernetes\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.867220 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-systemd\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.929611 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:42 crc kubenswrapper[4990]: I1205 01:08:42.929650 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.929774 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:42 crc kubenswrapper[4990]: E1205 01:08:42.929970 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.084228 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerStarted","Data":"c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.084538 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerStarted","Data":"dff7bdcebb786d1c6731a34bf30d0106ce66f0731963a09aafc023ed0387fc7a"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.086372 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wb424" event={"ID":"0f072df2-6ddf-4707-8852-a60655293cc8","Type":"ContainerStarted","Data":"e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.086454 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wb424" event={"ID":"0f072df2-6ddf-4707-8852-a60655293cc8","Type":"ContainerStarted","Data":"5680aafcb6e8df71aac097132dd01006371f5eb6a1d6bd5f6c8178cb8d2e4c2b"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.088328 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.088369 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.088386 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"744f6da4cf5c7bd40963939c709c92c42043459bbd60e5be7418ca1d4c0a1bb1"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.089494 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerStarted","Data":"65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.089566 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerStarted","Data":"993ba063dcc60671ec0568a87a8e36e04a3804391461893159433591b851344e"} Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.102863 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.122415 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.135146 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.149336 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.161219 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.175089 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.189270 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.203081 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.217198 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.231624 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.257218 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.275288 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.295245 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.307889 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.324747 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.338612 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.363352 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.402943 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.423585 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.425711 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.430640 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3eeec70d-1c5c-434e-90bc-95620458151c-ovn-node-metrics-cert\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.436240 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.447345 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.457350 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.461940 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.475170 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.495564 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:43Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.842883 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.863509 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8ffk\" (UniqueName: 
\"kubernetes.io/projected/3eeec70d-1c5c-434e-90bc-95620458151c-kube-api-access-s8ffk\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.867786 4990 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.867826 4990 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.867849 4990 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.867925 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib podName:3eeec70d-1c5c-434e-90bc-95620458151c nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.367897399 +0000 UTC m=+22.744112760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib") pod "ovnkube-node-4w6g9" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c") : failed to sync configmap cache: timed out waiting for the condition Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.867956 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides podName:3eeec70d-1c5c-434e-90bc-95620458151c nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.36794257 +0000 UTC m=+22.744158131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides") pod "ovnkube-node-4w6g9" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c") : failed to sync configmap cache: timed out waiting for the condition Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.867980 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config podName:3eeec70d-1c5c-434e-90bc-95620458151c nodeName:}" failed. No retries permitted until 2025-12-05 01:08:44.367969861 +0000 UTC m=+22.744185432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config") pod "ovnkube-node-4w6g9" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c") : failed to sync configmap cache: timed out waiting for the condition Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.907314 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 01:08:43 crc kubenswrapper[4990]: I1205 01:08:43.929980 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:43 crc kubenswrapper[4990]: E1205 01:08:43.930171 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.083131 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.093921 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e"} Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.095818 4990 generic.go:334] "Generic (PLEG): container finished" podID="644fbc14-61e3-4544-b42b-da32f942c0bc" containerID="c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f" exitCode=0 Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.096038 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerDied","Data":"c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f"} Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.109982 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.113332 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.136884 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.158734 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.173246 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.187312 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.190258 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.202002 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.214195 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.231778 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.246381 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.264884 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.279277 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.293989 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.316582 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.330409 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.344855 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.361867 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.373957 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.387696 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.393301 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.393369 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.393403 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.394341 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.394440 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.394594 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config\") pod \"ovnkube-node-4w6g9\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.404843 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc 
kubenswrapper[4990]: I1205 01:08:44.412740 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.421802 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.454437 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.468736 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.480767 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.492384 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.596388 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.596546 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.596613 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.596695 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.596699 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:08:48.596661125 +0000 UTC m=+26.972876496 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.596760 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:48.596742897 +0000 UTC m=+26.972958258 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.596841 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.596865 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:48.59685906 +0000 UTC m=+26.973074421 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.698359 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.698416 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698615 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698617 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698665 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698685 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698784 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-05 01:08:48.698758351 +0000 UTC m=+27.074973712 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698637 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698814 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.698845 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:48.698836633 +0000 UTC m=+27.075052204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.705690 4990 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.708430 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.708496 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.708512 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.708689 4990 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.715970 4990 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.716416 4990 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.717838 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.717981 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.718104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 
01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.718218 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.718403 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.739573 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.743241 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.743298 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.743314 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.743333 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.743349 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.755670 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.759529 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.759586 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.759601 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.759624 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.759638 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.773118 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.776981 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.777032 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.777049 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.777072 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.777088 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.789877 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.793401 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.793442 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.793450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.793473 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.793504 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.806716 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:44Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.806871 4990 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.813279 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.813339 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.813353 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.813684 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.813716 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.916507 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.916550 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.916564 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.916583 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.916597 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:44Z","lastTransitionTime":"2025-12-05T01:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.929776 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:44 crc kubenswrapper[4990]: I1205 01:08:44.929846 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.929939 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:44 crc kubenswrapper[4990]: E1205 01:08:44.930330 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.020195 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.020251 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.020263 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.020300 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.020315 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.101770 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5" exitCode=0 Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.101840 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.102088 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"6c7c225b4bf95d5db30b60660c1195bdb1d9576ebe954015a100962731b2df0f"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.104158 4990 generic.go:334] "Generic (PLEG): container finished" podID="644fbc14-61e3-4544-b42b-da32f942c0bc" containerID="6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1" exitCode=0 Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.104210 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerDied","Data":"6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.115065 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.123497 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.123905 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.123915 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.123934 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.123945 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.133491 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.150769 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"m
ultus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.174786 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z 
is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.191213 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.203617 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.213664 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.226762 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.226802 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.226831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.226850 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.226860 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.232101 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.252740 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.265781 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.281318 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.301960 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.318185 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.329374 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.329420 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.329432 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.329450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.329461 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.333693 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.347894 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.364125 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.377119 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.397558 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.412548 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.426544 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.434103 4990 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.434150 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.434162 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.434182 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.434196 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.453648 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.471727 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.490041 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.513049 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.536959 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.537032 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.537049 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.537076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.537096 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.640170 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.640218 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.640231 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.640253 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.640266 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.670609 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vlg2t"] Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.671039 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.675419 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.675545 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.675628 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.675768 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.687318 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.701350 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.708936 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ccea29b1-256e-417e-985e-ca477e0b8d7b-serviceca\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.708983 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccea29b1-256e-417e-985e-ca477e0b8d7b-host\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.709008 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmsw7\" (UniqueName: \"kubernetes.io/projected/ccea29b1-256e-417e-985e-ca477e0b8d7b-kube-api-access-kmsw7\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.722447 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.739386 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.742968 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.742995 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.743006 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.743025 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.743036 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.754816 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.771779 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.785772 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.798262 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.810334 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccea29b1-256e-417e-985e-ca477e0b8d7b-host\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.810373 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmsw7\" (UniqueName: \"kubernetes.io/projected/ccea29b1-256e-417e-985e-ca477e0b8d7b-kube-api-access-kmsw7\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.810423 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ccea29b1-256e-417e-985e-ca477e0b8d7b-serviceca\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.810525 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ccea29b1-256e-417e-985e-ca477e0b8d7b-host\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.811380 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ccea29b1-256e-417e-985e-ca477e0b8d7b-serviceca\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.814873 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.829977 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmsw7\" (UniqueName: \"kubernetes.io/projected/ccea29b1-256e-417e-985e-ca477e0b8d7b-kube-api-access-kmsw7\") pod \"node-ca-vlg2t\" (UID: \"ccea29b1-256e-417e-985e-ca477e0b8d7b\") " pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.835576 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z 
is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.846976 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.847129 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.847190 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.847252 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.847319 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.848976 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.865328 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.881371 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:45Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.930213 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:45 crc kubenswrapper[4990]: E1205 01:08:45.930444 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.949856 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.949925 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.949947 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.949976 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.949998 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:45Z","lastTransitionTime":"2025-12-05T01:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:45 crc kubenswrapper[4990]: I1205 01:08:45.987197 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vlg2t" Dec 05 01:08:46 crc kubenswrapper[4990]: W1205 01:08:46.001319 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccea29b1_256e_417e_985e_ca477e0b8d7b.slice/crio-3ae668146dec3b767c0f06ab0ebf258d6438b4b5b9db4406bdf1e7498acb560a WatchSource:0}: Error finding container 3ae668146dec3b767c0f06ab0ebf258d6438b4b5b9db4406bdf1e7498acb560a: Status 404 returned error can't find the container with id 3ae668146dec3b767c0f06ab0ebf258d6438b4b5b9db4406bdf1e7498acb560a Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.053627 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.053667 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.053682 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.053704 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.053717 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.119750 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.119813 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.119824 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.119836 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.119846 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.119856 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.120846 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vlg2t" event={"ID":"ccea29b1-256e-417e-985e-ca477e0b8d7b","Type":"ContainerStarted","Data":"3ae668146dec3b767c0f06ab0ebf258d6438b4b5b9db4406bdf1e7498acb560a"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.123457 4990 generic.go:334] "Generic (PLEG): container finished" podID="644fbc14-61e3-4544-b42b-da32f942c0bc" containerID="0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d" exitCode=0 Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.123524 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerDied","Data":"0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.140673 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.154809 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z"
Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.163660 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.163711 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.163724 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.163746 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.163759 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.171571 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.187797 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.204841 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.218653 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.232840 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.246682 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath
\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.266910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.266972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.266989 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.267018 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.267036 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.267389 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee7
0f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.281389 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.297381 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.315710 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.355217 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.370856 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.370911 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.370924 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.370945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.370959 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.474051 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.474114 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.474133 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.474152 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.474164 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.536475 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.543284 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.549174 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.557323 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.576801 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.576908 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.576939 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.576980 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.577010 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.578967 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.592751 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.610461 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.625805 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.641288 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.655586 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.679874 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.679925 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.679939 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.679959 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.679974 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.694783 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.734362 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.777438 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.783031 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.783097 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.783111 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.783138 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.783153 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.821336 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.872339 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z 
is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.887569 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.887616 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.887625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.887645 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.887658 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.894369 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.929372 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.929421 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:46 crc kubenswrapper[4990]: E1205 01:08:46.929552 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:46 crc kubenswrapper[4990]: E1205 01:08:46.929768 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.939014 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.977033 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:46Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.990670 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.990779 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.990801 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.990870 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:46 crc kubenswrapper[4990]: I1205 01:08:46.990892 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:46Z","lastTransitionTime":"2025-12-05T01:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.015854 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.062868 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.096075 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.096154 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.096172 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.096203 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.096225 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.101746 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.130858 4990 generic.go:334] "Generic (PLEG): container finished" podID="644fbc14-61e3-4544-b42b-da32f942c0bc" containerID="f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7" exitCode=0 Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.130936 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerDied","Data":"f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.133397 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vlg2t" event={"ID":"ccea29b1-256e-417e-985e-ca477e0b8d7b","Type":"ContainerStarted","Data":"b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.142574 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.180957 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.200552 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.200631 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.200644 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.200691 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.200708 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.218261 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.255199 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.298741 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.303787 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.303882 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.303898 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.304561 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.304599 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.337745 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.379911 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.409022 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.409085 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.409102 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.409129 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.409145 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.423983 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z 
is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.457721 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.500068 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.511760 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.511806 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.511822 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.511846 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.511863 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.547186 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:
08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.582253 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.615614 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.615692 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.615711 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.615748 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.615768 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.623942 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.672670 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z 
is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.698404 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.719058 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.719130 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.719154 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.719184 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.719203 4990 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.746630 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.782848 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.818174 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.823552 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.823619 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.823636 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.823661 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.823676 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.862443 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.896569 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.926050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.926113 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.926130 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.926158 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.926177 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:47Z","lastTransitionTime":"2025-12-05T01:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.930108 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:47 crc kubenswrapper[4990]: E1205 01:08:47.930247 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.938878 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:47 crc kubenswrapper[4990]: I1205 01:08:47.975441 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:47Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.012378 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.029435 4990 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.029501 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.029512 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.029529 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.029540 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.131798 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.131832 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.131841 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.131858 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.131868 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.140627 4990 generic.go:334] "Generic (PLEG): container finished" podID="644fbc14-61e3-4544-b42b-da32f942c0bc" containerID="692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200" exitCode=0 Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.140799 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerDied","Data":"692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.147905 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.169176 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.189782 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.229697 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.237569 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.237939 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.238150 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.238397 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.238632 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.247213 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.264343 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.280343 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.292962 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.338661 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.342450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.342512 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.342524 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.342547 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.342561 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.374168 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.414872 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.445537 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.445578 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.445591 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.445609 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.445622 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.454652 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.496541 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.533643 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.548917 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc 
kubenswrapper[4990]: I1205 01:08:48.548971 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.548984 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.549007 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.549021 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.580565 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:48Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.642559 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.643273 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.643432 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.643620 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.643715 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-05 01:08:56.6436887 +0000 UTC m=+35.019904081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.644415 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:08:56.644397111 +0000 UTC m=+35.020612492 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.644578 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.644661 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:56.644616877 +0000 UTC m=+35.020832248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.653658 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.653926 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.653944 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.653972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.653993 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.745047 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.745137 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745336 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745383 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745405 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745338 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745512 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:56.745459366 +0000 UTC m=+35.121674737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745514 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745548 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.745589 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:08:56.74557466 +0000 UTC m=+35.121790031 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.759029 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.759073 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.759085 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.759109 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.759122 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.864724 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.864783 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.864794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.864813 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.864824 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.929826 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.930058 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.930820 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:48 crc kubenswrapper[4990]: E1205 01:08:48.930957 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.968518 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.968579 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.968597 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.968623 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:48 crc kubenswrapper[4990]: I1205 01:08:48.968642 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:48Z","lastTransitionTime":"2025-12-05T01:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.072607 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.072726 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.072739 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.072762 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.072776 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.154909 4990 generic.go:334] "Generic (PLEG): container finished" podID="644fbc14-61e3-4544-b42b-da32f942c0bc" containerID="4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53" exitCode=0 Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.154960 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerDied","Data":"4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.176785 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.176818 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.176827 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.176843 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.176851 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.186828 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.223165 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.260339 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.273082 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.278852 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.278887 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.278896 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.278913 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.278928 4990 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.287416 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faa
f92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.302446 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.317596 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.328457 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.343042 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.360106 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.377630 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.381688 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.381731 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.381744 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.381760 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.381769 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.392530 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.409588 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.424956 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:49Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.485744 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.485809 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.485819 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.485840 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.485851 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.589087 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.589126 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.589135 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.589150 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.589160 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.692066 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.692107 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.692115 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.692131 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.692140 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.795042 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.795082 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.795090 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.795104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.795115 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.898222 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.898285 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.898304 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.898332 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.898350 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:49Z","lastTransitionTime":"2025-12-05T01:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:49 crc kubenswrapper[4990]: I1205 01:08:49.930174 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:49 crc kubenswrapper[4990]: E1205 01:08:49.930450 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.003218 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.003284 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.003298 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.003320 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.003335 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.106956 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.106996 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.107009 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.107030 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.107044 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.162263 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" event={"ID":"644fbc14-61e3-4544-b42b-da32f942c0bc","Type":"ContainerStarted","Data":"531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.177745 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.193900 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.209465 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.210468 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.210587 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.210612 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.210657 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.210685 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.233364 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.270899 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z 
is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.286829 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.306881 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.313073 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.313153 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.313176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.313207 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.313227 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.327278 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.344309 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.367185 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.383596 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.399676 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.414072 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.416050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.416090 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.416099 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.416122 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.416133 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.428614 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:50Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.519076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.519110 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.519122 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.519139 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.519151 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.623011 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.623065 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.623079 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.623101 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.623116 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.726841 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.727311 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.727326 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.727347 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.727359 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.830583 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.830632 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.830646 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.830668 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.830684 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.930307 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.930306 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:50 crc kubenswrapper[4990]: E1205 01:08:50.930516 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:50 crc kubenswrapper[4990]: E1205 01:08:50.930543 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.932576 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.932603 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.932613 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.932625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:50 crc kubenswrapper[4990]: I1205 01:08:50.932638 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:50Z","lastTransitionTime":"2025-12-05T01:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.035857 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.035905 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.035918 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.035943 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.035966 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.139322 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.139402 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.139422 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.139449 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.139466 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.174208 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.174942 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.199870 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.208309 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.219870 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.239087 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.242074 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.242124 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.242143 4990 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.242173 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.242237 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.266912 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.282363 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.296098 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.321279 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312c
e4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.338307 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.346606 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.346663 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.346676 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.346698 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.346711 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.355573 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.382823 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\
\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.397076 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.410679 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.427016 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.439410 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.449262 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.449468 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.449597 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.449696 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.449761 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.454965 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.470249 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.490205 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.503726 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.518462 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.536830 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.549852 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.552564 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.552613 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.552624 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.552639 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.552649 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.573365 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.589897 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.604234 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.619001 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.635761 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.648667 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.655888 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.655950 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.655971 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.655999 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.656017 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.670409 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.759834 4990 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.759878 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.759890 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.759909 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.759924 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.862021 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.862061 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.862076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.862095 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.862106 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.930625 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:51 crc kubenswrapper[4990]: E1205 01:08:51.930798 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.948521 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.963588 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.964780 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.964806 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.964817 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.964840 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.964850 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:51Z","lastTransitionTime":"2025-12-05T01:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:51 crc kubenswrapper[4990]: I1205 01:08:51.981943 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:51Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.005854 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.026942 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.048222 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.067340 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.067392 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.067447 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.067497 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.067546 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.072321 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.091506 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.107847 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.126502 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.140838 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.169185 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.170661 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.170717 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.170729 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.170751 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.170765 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.177072 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.178025 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.192324 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.218519 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.223706 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.241288 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.258975 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.273066 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.273116 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc 
kubenswrapper[4990]: I1205 01:08:52.273127 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.273155 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.273168 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.278697 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643
b063f695014f97de72f812a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.290517 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.302497 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.316243 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.329519 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.342387 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.359456 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.372714 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.376916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.376982 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.376997 4990 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.377019 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.377041 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.395047 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.410334 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.426257 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.441776 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:52Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.480026 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.480068 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.480078 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.480095 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.480106 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.583395 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.583687 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.583784 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.583865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.583930 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.687039 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.687087 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.687107 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.687125 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.687138 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.789915 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.789969 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.789982 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.790000 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.790013 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.892534 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.892645 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.892666 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.892699 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.892722 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.929992 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.930033 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:52 crc kubenswrapper[4990]: E1205 01:08:52.930152 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:52 crc kubenswrapper[4990]: E1205 01:08:52.930386 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.995465 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.995519 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.995529 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.995549 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:52 crc kubenswrapper[4990]: I1205 01:08:52.995563 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:52Z","lastTransitionTime":"2025-12-05T01:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.099047 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.099116 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.099134 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.099160 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.099223 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.181041 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.202272 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.202321 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.202334 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.202359 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.202374 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.305841 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.306164 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.306302 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.306415 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.306528 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.410004 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.410072 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.410095 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.410133 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.410160 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.513620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.513689 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.513712 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.513744 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.513767 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.617820 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.617882 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.617907 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.617938 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.617961 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.721829 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.721916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.721944 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.721977 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.722000 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.825356 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.825413 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.825429 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.825454 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.825471 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.928967 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.929036 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.929064 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.929095 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:53 crc kubenswrapper[4990]: I1205 01:08:53.929119 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:53Z","lastTransitionTime":"2025-12-05T01:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
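All of these sync failures trace back to the same precondition: the runtime finds no CNI network configuration in the directory the error names. An illustrative check, assuming the conventional libcni file extensions (.conf, .conflist, .json; the filter is an assumption, not from the log), with the path copied verbatim from the error:

    import os

    # Directory taken verbatim from the kubelet error message; the
    # extension filter follows the usual libcni convention (assumed).
    cni_dir = "/etc/kubernetes/cni/net.d/"
    confs = []
    if os.path.isdir(cni_dir):
        confs = [f for f in os.listdir(cni_dir)
                 if f.endswith((".conf", ".conflist", ".json"))]
    if not confs:
        print(f"no CNI configuration file in {cni_dir}")  # matches the log
    else:
        print("found:", confs)

Run on the node itself, this distinguishes "directory empty because the network plugin has not written its config yet" from "directory missing entirely".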
Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.136114 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.136170 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.136183 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.136206 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.136221 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.188007 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/0.log" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.192641 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6" exitCode=1 Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.192701 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.199092 4990 scope.go:117] "RemoveContainer" containerID="9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.224602 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.240094 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.240164 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.240190 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.240223 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.240248 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.246384 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.279837 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643
b063f695014f97de72f812a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:53Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI1205 01:08:53.172650 6266 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.172759 6266 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.173533 6266 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 01:08:53.173553 6266 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 01:08:53.173595 6266 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 01:08:53.173653 6266 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 01:08:53.173775 6266 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:08:53.173793 6266 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:08:53.173850 6266 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:08:53.173881 6266 factory.go:656] Stopping watch factory\\\\nI1205 01:08:53.173904 6266 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 01:08:53.173919 6266 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:08:53.173938 6266 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.296069 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.317904 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.339008 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.344903 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.344945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.344954 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.344974 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.344988 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.354395 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.374750 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.394783 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.417272 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.430544 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l"] Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.431695 4990 util.go:30] "No sandbox for pod can be found. 
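
[Editor's note: every "Failed to update status for pod" entry in this excerpt fails for the same root cause. The API server cannot call the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-05T01:08:54Z. Note that network-node-identity-vrzqb, the pod backing that webhook, is itself among the pods whose status patch is rejected. Below is a minimal Go sketch for confirming the certificate's validity window from the node; the endpoint is taken from the log, and InsecureSkipVerify is deliberate so the handshake completes despite the expiry. The same dates are visible with: openssl s_client -connect 127.0.0.1:9743 </dev/null | openssl x509 -noout -dates

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Dial the webhook endpoint seen in the log and complete a TLS
        // handshake without verifying the (expired) certificate.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspection only
        })
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
    }
]
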
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.435732 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.435744 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.447681 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.447721 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.447733 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.447752 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.447766 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.448169 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
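
[Editor's note: the NodeNotReady condition above is a separate but related problem. The kubelet reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet; on this cluster the network plugin pods visible in the log (multus and ovn-kubernetes) are the ones that eventually write it, so the node stays NotReady until they come up. The sketch below approximates that readiness condition under the assumption that any .conf/.conflist/.json file in the directory counts as a candidate config; the real libcni loading logic is more involved.

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // path from the kubelet message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            log.Fatalf("read %s: %v", confDir, err)
        }
        found := 0
        for _, e := range entries {
            // Treat the usual CNI config extensions as candidates.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("candidate CNI config:", e.Name())
                found++
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration files; the node will stay NotReady")
        }
    }
]
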
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.464474 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.483961 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.503392 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312c
e4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.506625 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9brj\" (UniqueName: \"kubernetes.io/projected/8617140c-972f-4ec0-b814-350305fff19f-kube-api-access-t9brj\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: 
\"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.506722 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8617140c-972f-4ec0-b814-350305fff19f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.506873 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8617140c-972f-4ec0-b814-350305fff19f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.507019 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8617140c-972f-4ec0-b814-350305fff19f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.518363 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.542034 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.550819 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.550913 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.550961 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.550993 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.551046 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.561348 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
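
[Editor's note: the patch payloads in these entries are hard to read because they appear to be escaped twice, once by klog's %q quoting of the err value and once more in this capture, which is why every JSON key shows up as \\\". After stripping the outer err="..." layer, one strconv.Unquote pass plus json.Indent recovers readable JSON. A self-contained sketch follows, using a shortened stand-in payload rather than the multi-kilobyte originals; the uid is the one logged for network-check-target-xd92c.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "strconv"
    )

    func main() {
        // Shortened stand-in for the real status patches in this log.
        quoted := `"{\"metadata\":{\"uid\":\"3b6479f0-333b-4a96-9adf-2099afdc2447\"},\"status\":{\"podIP\":null}}"`
        raw, err := strconv.Unquote(quoted)
        if err != nil {
            log.Fatalf("unquote: %v", err)
        }
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
            log.Fatalf("indent: %v", err)
        }
        fmt.Println(pretty.String())
    }
]
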
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.578347 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.596827 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
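
[Editor's note: two exit codes recur in these container statuses. exitCode 137 on the ContainerStatusUnknown entries follows the container-runtime convention of 128+signal, i.e. SIGKILL, consistent with containers that were torn down rather than exiting on their own. exitCode 255 on kube-apiserver-check-endpoints matches klog's Fatal path (the F1205 ... pods "kube-apiserver-crc" not found line earlier), which exits with status 255 after logging. A small sketch of the 128+signal interpretation:

    package main

    import (
        "fmt"
        "syscall"
    )

    // describe applies the usual container-runtime convention: an exit
    // status above 128 generally means the process died from signal
    // (status - 128).
    func describe(code int) string {
        if code > 128 && code < 160 {
            return fmt.Sprintf("likely killed by signal %d (%s)",
                code-128, syscall.Signal(code-128))
        }
        return "ordinary process exit status"
    }

    func main() {
        for _, code := range []int{0, 137, 255} { // values seen in this log
            fmt.Printf("exit %3d: %s\n", code, describe(code))
        }
    }
]
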
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.608216 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8617140c-972f-4ec0-b814-350305fff19f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.608324 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8617140c-972f-4ec0-b814-350305fff19f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.608399 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9brj\" (UniqueName: \"kubernetes.io/projected/8617140c-972f-4ec0-b814-350305fff19f-kube-api-access-t9brj\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.608445 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8617140c-972f-4ec0-b814-350305fff19f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.610385 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8617140c-972f-4ec0-b814-350305fff19f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.610524 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8617140c-972f-4ec0-b814-350305fff19f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 
01:08:54.613969 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc 
kubenswrapper[4990]: I1205 01:08:54.618212 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8617140c-972f-4ec0-b814-350305fff19f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.629810 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9brj\" (UniqueName: \"kubernetes.io/projected/8617140c-972f-4ec0-b814-350305fff19f-kube-api-access-t9brj\") pod \"ovnkube-control-plane-749d76644c-pss5l\" (UID: \"8617140c-972f-4ec0-b814-350305fff19f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.633707 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.654150 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.654216 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 
01:08:54.654235 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.654262 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.654279 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.662400 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.680850 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.696332 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.718851 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758ea
dafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:53Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI1205 01:08:53.172650 6266 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.172759 6266 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.173533 6266 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 01:08:53.173553 6266 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 01:08:53.173595 6266 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 01:08:53.173653 6266 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 01:08:53.173775 6266 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:08:53.173793 6266 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:08:53.173850 6266 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:08:53.173881 6266 factory.go:656] Stopping watch factory\\\\nI1205 01:08:53.173904 6266 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 01:08:53.173919 6266 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:08:53.173938 6266 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.732057 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.749160 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.750909 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.757765 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.757812 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.757825 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.757844 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.757856 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.776371 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.788629 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:54Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.861136 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.861183 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.861196 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.861216 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.861230 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.930035 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:54 crc kubenswrapper[4990]: E1205 01:08:54.930191 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.930818 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:54 crc kubenswrapper[4990]: E1205 01:08:54.930988 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.964518 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.964585 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.964604 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.964636 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:54 crc kubenswrapper[4990]: I1205 01:08:54.964660 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:54Z","lastTransitionTime":"2025-12-05T01:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.040644 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.040698 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.040708 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.040727 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.040739 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.052292 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.056239 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.056292 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.056310 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.056332 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.056345 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.070416 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.074861 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.074901 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.074910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.074928 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.074939 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.090408 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.094456 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.094572 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.094602 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.094636 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.094661 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.109789 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.113973 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.114016 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.114028 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.114050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.114066 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.130106 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.130254 4990 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.132122 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.132160 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.132188 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.132214 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.132229 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.199965 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/0.log" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.202917 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.203055 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.205105 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" event={"ID":"8617140c-972f-4ec0-b814-350305fff19f","Type":"ContainerStarted","Data":"3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.205147 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" event={"ID":"8617140c-972f-4ec0-b814-350305fff19f","Type":"ContainerStarted","Data":"1aae7962052d9ec4ac34955589069935d307053d50fb2373d9357f15e8bd8794"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.232255 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:53Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI1205 01:08:53.172650 6266 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.172759 6266 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.173533 6266 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 01:08:53.173553 6266 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 01:08:53.173595 6266 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 01:08:53.173653 6266 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 01:08:53.173775 6266 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:08:53.173793 6266 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:08:53.173850 6266 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:08:53.173881 6266 factory.go:656] Stopping watch factory\\\\nI1205 01:08:53.173904 6266 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 01:08:53.173919 6266 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:08:53.173938 6266 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.238403 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.238446 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.238462 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.238525 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.238545 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.248511 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.264245 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.279508 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.305962 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.333722 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.341509 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.341757 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.341851 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.341924 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.341990 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.353470 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.378389 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.394076 
4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.417277 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.437665 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.445119 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.445354 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.445450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.445574 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.445664 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.457003 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.469960 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.486401 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.499409 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:55Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.549099 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc 
kubenswrapper[4990]: I1205 01:08:55.549167 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.549190 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.549218 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.549235 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.651800 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.651840 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.651851 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.651871 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.651888 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.755428 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.755471 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.755500 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.755518 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.755531 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.858606 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.858658 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.858669 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.858688 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.858703 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.929973 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:55 crc kubenswrapper[4990]: E1205 01:08:55.930123 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.961408 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.961718 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.961830 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.961902 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:55 crc kubenswrapper[4990]: I1205 01:08:55.961994 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:55Z","lastTransitionTime":"2025-12-05T01:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.064975 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.065033 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.065050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.065074 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.065090 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.169310 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.169404 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.169430 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.169462 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.169515 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.212343 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" event={"ID":"8617140c-972f-4ec0-b814-350305fff19f","Type":"ContainerStarted","Data":"5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.215439 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/1.log" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.216644 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/0.log" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.221987 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27" exitCode=1 Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.222064 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.222172 4990 scope.go:117] "RemoveContainer" containerID="9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.223367 4990 scope.go:117] "RemoveContainer" containerID="71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.223692 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.239869 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.259225 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.273170 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.273257 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.273274 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.273312 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.273334 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.287449 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.298056 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bxb6s"] Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.298952 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.299071 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.309286 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.327959 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.328108 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpsb\" (UniqueName: \"kubernetes.io/projected/7760172e-33aa-4de9-bd10-6a92c0851c6e-kube-api-access-bqpsb\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.331714 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.351701 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.376789 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.376978 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.377521 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.377540 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.377570 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.377589 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.397621 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.418465 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.431602 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: 
\"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.431700 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpsb\" (UniqueName: \"kubernetes.io/projected/7760172e-33aa-4de9-bd10-6a92c0851c6e-kube-api-access-bqpsb\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.431993 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.432192 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:08:56.932143799 +0000 UTC m=+35.308359340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.438676 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c87
3e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoi
nt\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.450857 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpsb\" (UniqueName: \"kubernetes.io/projected/7760172e-33aa-4de9-bd10-6a92c0851c6e-kube-api-access-bqpsb\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.455232 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 
01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.478442 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9a
e913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:53Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI1205 01:08:53.172650 6266 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.172759 6266 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.173533 6266 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 01:08:53.173553 6266 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 01:08:53.173595 6266 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 01:08:53.173653 6266 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 01:08:53.173775 6266 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:08:53.173793 6266 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:08:53.173850 6266 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:08:53.173881 6266 factory.go:656] Stopping watch factory\\\\nI1205 01:08:53.173904 6266 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 01:08:53.173919 6266 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:08:53.173938 6266 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.480121 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.480155 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.480164 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.480181 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.480196 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.496644 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.516267 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.537990 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.554832 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 
01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.568867 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.582701 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.582744 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.582756 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.582778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.582793 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.589725 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.604233 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.619767 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.639446 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.673209 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d
1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:53Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI1205 01:08:53.172650 6266 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.172759 6266 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.173533 6266 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 01:08:53.173553 6266 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 01:08:53.173595 6266 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 01:08:53.173653 6266 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 01:08:53.173775 6266 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:08:53.173793 6266 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:08:53.173850 6266 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:08:53.173881 6266 factory.go:656] Stopping watch factory\\\\nI1205 01:08:53.173904 6266 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 01:08:53.173919 6266 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:08:53.173938 6266 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.685360 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.685410 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.685424 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.685447 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.685464 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.685735 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.701242 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.717994 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.732717 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.735044 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.735197 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.735242 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:09:12.735214703 +0000 UTC m=+51.111430084 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.735352 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.735465 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:12.73544041 +0000 UTC m=+51.111655791 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.735325 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.735522 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.735810 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:12.735753759 +0000 UTC m=+51.111969160 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.746426 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.762233 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.778720 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.788079 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.788125 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.788140 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.788161 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.788176 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.794769 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.811386 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.834803 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.836158 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.836211 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836394 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836512 4990 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836394 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836541 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836558 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836574 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836620 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:12.836597235 +0000 UTC m=+51.212812606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.836647 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:12.836636227 +0000 UTC m=+51.212851598 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.850456 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.865233 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/ope
nshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.882939 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.890696 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.890730 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.890743 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.890765 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.890779 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.901540 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.915532 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.930521 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.930560 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.930764 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.930909 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.932536 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.936939 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.937236 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: E1205 01:08:56.937409 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:08:57.93737028 +0000 UTC m=+36.313585831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.949262 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.970136 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.989038 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:56Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.993992 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.994059 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.994074 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.994098 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:56 crc kubenswrapper[4990]: I1205 01:08:56.994113 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:56Z","lastTransitionTime":"2025-12-05T01:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.005850 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.020529 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.042715 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758ea
dafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9227d73d6693c98e44c90c0ba1688a5c43772643b063f695014f97de72f812a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:53Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI1205 01:08:53.172650 6266 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.172759 6266 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 01:08:53.173533 6266 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 01:08:53.173553 6266 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 01:08:53.173595 6266 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 01:08:53.173653 6266 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 01:08:53.173775 6266 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:08:53.173793 6266 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:08:53.173850 6266 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:08:53.173881 6266 factory.go:656] Stopping watch factory\\\\nI1205 01:08:53.173904 6266 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 01:08:53.173919 6266 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:08:53.173938 6266 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.057369 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.072153 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.084734 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.096270 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.097210 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.097267 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.097315 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.097342 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.097363 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.200568 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.200653 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.200672 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.200703 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.200723 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.228746 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/1.log" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.233971 4990 scope.go:117] "RemoveContainer" containerID="71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27" Dec 05 01:08:57 crc kubenswrapper[4990]: E1205 01:08:57.234224 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.254251 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.276728 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.293228 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 
01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.303864 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.304096 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.304111 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.304134 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.304150 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.314836 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.334991 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d
1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.352314 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.366848 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.389897 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.407579 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.407652 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.407671 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.407701 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.407722 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.408719 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.428598 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.441813 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.463065 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.485617 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.500827 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.510262 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.510321 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.510333 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.510352 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.510364 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.516071 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.532336 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:08:57Z is after 2025-08-24T17:21:41Z" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.613381 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.613418 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.613428 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.613447 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.613460 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.723726 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.723809 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.723829 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.723857 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.723879 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.827926 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.828027 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.828048 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.828108 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.828128 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.929612 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.929813 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:57 crc kubenswrapper[4990]: E1205 01:08:57.929949 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:57 crc kubenswrapper[4990]: E1205 01:08:57.930122 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.932143 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.932216 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.932240 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.932284 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.932308 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:57Z","lastTransitionTime":"2025-12-05T01:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:57 crc kubenswrapper[4990]: I1205 01:08:57.948924 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:57 crc kubenswrapper[4990]: E1205 01:08:57.949199 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:57 crc kubenswrapper[4990]: E1205 01:08:57.949354 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:08:59.949322468 +0000 UTC m=+38.325538049 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.036381 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.036469 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.036541 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.036578 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.036602 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.111150 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.140364 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.140421 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.140461 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.140535 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.140563 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.238072 4990 scope.go:117] "RemoveContainer" containerID="71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27" Dec 05 01:08:58 crc kubenswrapper[4990]: E1205 01:08:58.238401 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.243523 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.243590 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.243614 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.243641 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.243661 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.347298 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.347339 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.347351 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.347371 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.347386 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.450964 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.451017 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.451026 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.451043 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.451057 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.554553 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.554602 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.554620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.554649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.554673 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.658243 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.658352 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.658375 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.658403 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.658426 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.761461 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.761574 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.761599 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.761679 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.761700 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.865187 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.865259 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.865277 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.865306 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.865328 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.929901 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.929901 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:08:58 crc kubenswrapper[4990]: E1205 01:08:58.930140 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:08:58 crc kubenswrapper[4990]: E1205 01:08:58.930219 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.969007 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.969089 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.969107 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.969135 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:58 crc kubenswrapper[4990]: I1205 01:08:58.969159 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:58Z","lastTransitionTime":"2025-12-05T01:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.072435 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.072508 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.072520 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.072541 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.072553 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.176727 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.176798 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.176827 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.176849 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.176861 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.279852 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.279916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.279928 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.279944 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.279956 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.383192 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.383243 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.383252 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.383271 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.383282 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.486182 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.486231 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.486240 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.486260 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.486272 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.590458 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.590541 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.590559 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.590580 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.590594 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.694802 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.694869 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.694892 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.694914 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.694927 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.802475 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.802614 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.802631 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.802657 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.802680 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.906913 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.906999 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.907019 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.907048 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.907067 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:08:59Z","lastTransitionTime":"2025-12-05T01:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.930021 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.930081 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:59 crc kubenswrapper[4990]: E1205 01:08:59.930247 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:08:59 crc kubenswrapper[4990]: E1205 01:08:59.930468 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:08:59 crc kubenswrapper[4990]: I1205 01:08:59.973427 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:08:59 crc kubenswrapper[4990]: E1205 01:08:59.973673 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:08:59 crc kubenswrapper[4990]: E1205 01:08:59.973778 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:09:03.973754529 +0000 UTC m=+42.349969960 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.009817 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.009876 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.009895 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.009919 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.009938 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.113307 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.113380 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.113402 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.113428 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.113448 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.216236 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.216291 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.216300 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.216317 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.216330 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319621 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319720 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319742 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319796 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319621 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319720 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319742 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.319796 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.423875 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.423930 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.423942 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.423959 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.423970 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.526940 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.526997 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.527014 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.527040 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.527051 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
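The condition object printed by setters.go:603 is an ordinary node condition; the node is schedulable only while Ready is "True". A hedged sketch of that evaluation using plain structs whose field names mirror the JSON above (this is illustrative, not the kubelet's own code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the JSON logged after "Node became not ready".
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// Ready must be exactly "True"; "False" and "Unknown" both mean not ready.
	fmt.Printf("node ready: %v (reason %s)\n", c.Status == "True", c.Reason)
}
```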
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.629969 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.630067 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.630094 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.630129 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.630147 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.732880 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.732939 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.732950 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.732966 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.732976 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.835634 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.835692 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.835709 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.835729 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.835743 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.930203 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.930218 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 01:09:00 crc kubenswrapper[4990]: E1205 01:09:00.930392 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
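Sandbox creation keeps failing because the runtime reports NetworkReady=false until it finds a CNI network config. A minimal Go sketch of the readiness test this amounts to, scanning the conf dir named in the message for .conf/.conflist/.json files (the path comes from the log; the code itself and the extension list are illustrative assumptions):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured reports whether dir holds at least one CNI network config file.
func cniConfigured(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file; err:", err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once the network provider (here, the crash-looping ovnkube-controller seen further down) writes its config into that directory, the NodeNotReady heartbeats below stop.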
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:00 crc kubenswrapper[4990]: E1205 01:09:00.930555 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.938692 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.938725 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.938738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.938839 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:00 crc kubenswrapper[4990]: I1205 01:09:00.938851 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:00Z","lastTransitionTime":"2025-12-05T01:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.041437 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.041563 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.041594 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.041634 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.041662 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.144627 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.144679 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.144694 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.144728 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.144742 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.247620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.247931 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.248048 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.248217 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.248312 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.351347 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.351382 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.351392 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.351407 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.351416 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.454440 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.454551 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.454596 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.454615 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.454627 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.557320 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.557377 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.557394 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.557421 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.557438 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.660981 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.661046 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.661070 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.661104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.661126 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.764374 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.764534 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.764567 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.764594 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.764612 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.868579 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.868757 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.868829 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.868867 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.868898 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.929454 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.929541 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s"
Dec 05 01:09:01 crc kubenswrapper[4990]: E1205 01:09:01.930203 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:01 crc kubenswrapper[4990]: E1205 01:09:01.930383 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.948506 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:01Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.968097 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:01Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.972018 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.972082 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.972098 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.972123 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.972140 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:01Z","lastTransitionTime":"2025-12-05T01:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:01 crc kubenswrapper[4990]: I1205 01:09:01.998329 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d
1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:01Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.012778 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.026638 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.044672 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.060600 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.071281 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.074625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.074652 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.074661 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.074675 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.074685 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.084260 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.097165 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.110261 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.122244 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.134702 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.149549 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.164781 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.177419 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:02Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.177514 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.177616 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.177632 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.177658 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.177672 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.280623 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.280663 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.280674 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.280691 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.280704 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.387685 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.387786 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.387811 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.387847 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.387872 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.491333 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.491395 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.491415 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.491441 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.491460 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.595306 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.595363 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.595382 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.595409 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.595427 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.698150 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.698214 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.698232 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.698259 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.698282 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.800872 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.800935 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.800951 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.800971 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.800985 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.903551 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.903596 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.903610 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.903628 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.903640 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:02Z","lastTransitionTime":"2025-12-05T01:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.930045 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:02 crc kubenswrapper[4990]: E1205 01:09:02.930165 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:02 crc kubenswrapper[4990]: I1205 01:09:02.930044 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:02 crc kubenswrapper[4990]: E1205 01:09:02.930265 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.006257 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.006300 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.006316 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.006334 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.006345 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.108456 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.108539 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.108553 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.108573 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.108584 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.210856 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.210927 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.210947 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.210966 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.210977 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.313098 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.313144 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.313155 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.313169 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.313180 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.415776 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.415854 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.415879 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.415915 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.415934 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.518181 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.518230 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.518246 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.518262 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.518272 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.621533 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.621598 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.621608 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.621635 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.621646 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.725049 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.725106 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.725119 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.725138 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.725150 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.828120 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.828176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.828190 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.828211 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.828222 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.929517 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:03 crc kubenswrapper[4990]: E1205 01:09:03.929665 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.930254 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:03 crc kubenswrapper[4990]: E1205 01:09:03.930415 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.931344 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.931368 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.931378 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.931391 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:03 crc kubenswrapper[4990]: I1205 01:09:03.931401 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:03Z","lastTransitionTime":"2025-12-05T01:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.015513 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:04 crc kubenswrapper[4990]: E1205 01:09:04.015698 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:04 crc kubenswrapper[4990]: E1205 01:09:04.015767 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:09:12.015751634 +0000 UTC m=+50.391966995 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.035179 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.035258 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.035275 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.035305 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.035322 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.138563 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.138615 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.138626 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.138641 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.138655 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.242002 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.242076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.242094 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.242123 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.242145 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.346395 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.346445 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.346456 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.346507 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.346521 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.448990 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.449043 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.449058 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.449080 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.449091 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.552267 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.552352 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.552370 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.552396 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.552412 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.655171 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.655204 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.655214 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.655231 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.655247 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.758131 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.758203 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.758224 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.758253 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.758273 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.861575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.861649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.861670 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.861700 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.861724 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.929320 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.929519 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:04 crc kubenswrapper[4990]: E1205 01:09:04.929818 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:04 crc kubenswrapper[4990]: E1205 01:09:04.930069 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.964526 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.964569 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.964581 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.964607 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:04 crc kubenswrapper[4990]: I1205 01:09:04.964623 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:04Z","lastTransitionTime":"2025-12-05T01:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.067466 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.067801 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.067875 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.067987 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.068073 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.170719 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.170784 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.170804 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.170831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.170851 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.274631 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.274693 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.274712 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.274739 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.274758 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.377903 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.377960 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.377974 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.377996 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.378010 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.481313 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.481385 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.481403 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.481432 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.481454 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.523673 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.523738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.523778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.523817 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.523841 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.546394 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:05Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.552988 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.553045 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.553067 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.553094 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.553111 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.573662 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:05Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.582714 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.583018 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.583212 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.583525 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.583599 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.609106 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:05Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.615801 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.615863 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.615882 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.615912 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.615933 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.638279 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:05Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.643469 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.643546 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
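Every one of the "Error updating node status, will retry" entries above fails for the same reason: the node.network-node-identity.openshift.io webhook presents a serving certificate that expired on 2025-08-24T17:21:41Z, long before the node's clock time of 2025-12-05. The status patch payloads are identical from attempt to attempt (only the log timestamps differ), so repeated copies are elided as "{…}". A minimal sketch for confirming the certificate's notAfter date from the node, assuming Python 3 with the pyca/cryptography package installed and the webhook address quoted in the log (127.0.0.1:9743) reachable:

import datetime
import ssl

from cryptography import x509

# Fetch the certificate without verifying it; a verifying client (like the
# kubelet here) aborts the handshake, which is exactly the failure in the log.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_der_x509_certificate(ssl.PEM_cert_to_DER_cert(pem))

now = datetime.datetime.now(datetime.timezone.utc)
# not_valid_after_utc needs cryptography >= 42; older releases expose the
# naive-datetime not_valid_after instead.
print("notAfter:", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41+00:00
print("expired:", cert.not_valid_after_utc < now)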
event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.643558 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.643577 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.643590 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.668843 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:05Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.669193 4990 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.671252 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
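After the retry budget is exhausted, the kubelet keeps cycling through the same four node events because the Ready condition has a second, independent blocker spelled out in every "Node became not ready" entry: no CNI configuration file exists under /etc/kubernetes/cni/net.d/, so NetworkReady stays false until the network provider writes one. A rough sketch of that directory check, assuming only the path quoted in the log:

import os

cni_dir = "/etc/kubernetes/cni/net.d"
try:
    # .conf, .conflist and .json are the extensions libcni scans for.
    confs = sorted(f for f in os.listdir(cni_dir)
                   if f.endswith((".conf", ".conflist", ".json")))
except FileNotFoundError:
    confs = []

# An empty result matches the NetworkReady=false / NetworkPluginNotReady
# condition repeated in the log; any entry here means the CNI plugin has
# published its configuration and the condition should clear.
print(confs or f"no CNI configuration file in {cni_dir}/")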
event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.671307 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.671319 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.671339 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.671352 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.774814 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.774860 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.774873 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.774892 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.774905 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.878201 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.878271 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.878289 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.878319 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.878341 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.930059 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.930145 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.930274 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:05 crc kubenswrapper[4990]: E1205 01:09:05.930416 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.982674 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.982753 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.982772 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.982800 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:05 crc kubenswrapper[4990]: I1205 01:09:05.982825 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:05Z","lastTransitionTime":"2025-12-05T01:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.086112 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.086180 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.086204 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.086239 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.086262 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.190166 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.190217 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.190227 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.190246 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.190257 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.293656 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.293733 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.293756 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.293791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.293812 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.397598 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.397678 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.397702 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.397737 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.397760 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.501610 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.501669 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.501681 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.501703 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.501719 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.605031 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.605088 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.605107 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.605133 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.605154 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.709550 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.709686 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.709710 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.709744 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.709766 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.814638 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.814720 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.814737 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.814761 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.814787 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.919158 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.919259 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.919285 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.919320 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.919340 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:06Z","lastTransitionTime":"2025-12-05T01:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.929398 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:09:06 crc kubenswrapper[4990]: I1205 01:09:06.929511 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 01:09:06 crc kubenswrapper[4990]: E1205 01:09:06.929612 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 01:09:06 crc kubenswrapper[4990]: E1205 01:09:06.929784 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.023433 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.023542 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.023563 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.023596 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.023618 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.126758 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.126870 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.126891 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.126924 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.126948 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.229393 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.229509 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.229531 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.229567 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.229592 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.333716 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.333799 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.333824 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.333864 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.333892 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.437865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.437942 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.437965 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.437993 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.438012 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.541707 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.541780 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.541808 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.541845 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.541874 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.645762 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.645846 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.645876 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.645910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.645935 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.749506 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.749564 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.749576 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.749597 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.749611 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.853366 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.853444 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.853458 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.853508 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.853526 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.930345 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.930435 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:07 crc kubenswrapper[4990]: E1205 01:09:07.930693 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e"
Dec 05 01:09:07 crc kubenswrapper[4990]: E1205 01:09:07.930818 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.957409 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.957526 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.957546 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.957573 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:07 crc kubenswrapper[4990]: I1205 01:09:07.957591 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:07Z","lastTransitionTime":"2025-12-05T01:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.062580 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.062755 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.062784 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.063374 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.063573 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.168105 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.168176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.168189 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.168212 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.168225 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.271972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.272053 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.272076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.272108 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.272130 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.375449 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.375582 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.375597 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.375619 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.375634 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.479525 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.479667 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.479689 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.479717 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.479735 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.582601 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.582689 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.582715 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.582752 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.582779 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.686554 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.686609 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.686626 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.686655 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.686678 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.789808 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.789901 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.789927 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.789967 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.789996 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.892666 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.892724 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.892741 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.892766 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.892784 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.929607 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.929662 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 01:09:08 crc kubenswrapper[4990]: E1205 01:09:08.929781 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 01:09:08 crc kubenswrapper[4990]: E1205 01:09:08.929939 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.996101 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.996176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.996187 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.996221 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:08 crc kubenswrapper[4990]: I1205 01:09:08.996233 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:08Z","lastTransitionTime":"2025-12-05T01:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.100043 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.100112 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.100128 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.100149 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.100165 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.204144 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.204220 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.204238 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.204272 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.204295 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.308016 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.308105 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.308127 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.308157 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.308179 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.411781 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.411902 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.411930 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.411973 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.411999 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.516036 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.516116 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.516138 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.516172 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.516196 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.619802 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.619863 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.619883 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.619940 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.619959 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.723428 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.723513 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.723530 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.723551 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.723568 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.827160 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.827239 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.827257 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.827290 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.827313 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.929746 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.929872 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s"
Dec 05 01:09:09 crc kubenswrapper[4990]: E1205 01:09:09.930053 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 01:09:09 crc kubenswrapper[4990]: E1205 01:09:09.930336 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.930872 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.930926 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.930936 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.930958 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:09 crc kubenswrapper[4990]: I1205 01:09:09.930973 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:09Z","lastTransitionTime":"2025-12-05T01:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.034049 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.034129 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.034152 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.034186 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.034211 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.137372 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.137467 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.137506 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.137528 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.137541 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.241009 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.241055 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.241066 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.241085 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.241096 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.343867 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.343910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.343922 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.343940 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.343955 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.447046 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.447090 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.447105 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.447126 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.447142 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.549746 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.549787 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.549798 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.549812 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.549822 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.653525 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.653621 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.653642 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.653676 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.653696 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.757050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.757140 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.757158 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.757183 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.757200 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.860639 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.860840 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.860885 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.860922 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.860946 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.929789 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.929819 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 01:09:10 crc kubenswrapper[4990]: E1205 01:09:10.930097 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 01:09:10 crc kubenswrapper[4990]: E1205 01:09:10.930221 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.965051 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.965157 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.965174 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.965202 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:10 crc kubenswrapper[4990]: I1205 01:09:10.965221 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:10Z","lastTransitionTime":"2025-12-05T01:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.068609 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.068729 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.068753 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.068787 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.068812 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.172406 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.172534 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.172555 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.172581 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.172599 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.276304 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.276379 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.276624 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.276658 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.276683 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.380889 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.380949 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.380967 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.380999 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.381020 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.484407 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.484517 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.484546 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.484581 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.484603 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.587807 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.587870 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.587893 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.587922 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.587943 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.691465 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.691552 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.691568 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.691596 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.691614 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.794761 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.794821 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.794844 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.794874 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.794895 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.898522 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.898665 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.898692 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.898724 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.898741 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:11Z","lastTransitionTime":"2025-12-05T01:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.930353 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.931141 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s"
Dec 05 01:09:11 crc kubenswrapper[4990]: E1205 01:09:11.930655 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 01:09:11 crc kubenswrapper[4990]: E1205 01:09:11.931441 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.931907 4990 scope.go:117] "RemoveContainer" containerID="71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.953219 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:11Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:11 crc kubenswrapper[4990]: I1205 01:09:11.979116 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:11Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.001374 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:11Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.002428 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.002512 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.002540 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.002575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.002599 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.044191 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d
1099661bdbfacb87d9db7f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.064764 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.083096 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.100259 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.105844 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.105898 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.105916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.105946 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.105964 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.116041 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.116272 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.116353 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:09:28.116330584 +0000 UTC m=+66.492545945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.130952 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-pare
nt\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.161939 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.182994 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.208376 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.208414 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.208428 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.208449 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.208460 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.211233 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.225592 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.241882 
4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.253978 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.267045 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f
9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.280543 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.296110 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/1.log" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.299802 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.300309 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.311504 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.311545 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.311556 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.311577 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.311593 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.315942 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.335115 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 
01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.347042 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.359575 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.375946 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.394669 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.409208 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.413893 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.413995 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.414096 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.414238 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.414346 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.425439 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.445904 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.464245 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.484017 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.501504 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.521217 4990 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.521277 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.521291 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.521315 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.521333 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.527374 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 
01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.546769 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.562381 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.575633 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:12Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.623384 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.623663 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.623740 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.623843 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.623949 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.726278 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.726319 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.726327 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.726343 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.726354 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829436 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829546 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829583 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829592 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.829635 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:09:44.829589831 +0000 UTC m=+83.205805192 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829773 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829822 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829837 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.829868 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.829970 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.830031 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.830080 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:44.830050874 +0000 UTC m=+83.206266235 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.830109 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:44.830096265 +0000 UTC m=+83.206311866 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.929412 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.929603 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.929628 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.929946 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.930501 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.930542 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930670 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930689 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930699 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930712 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930716 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930724 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930773 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:44.930754577 +0000 UTC m=+83.306969938 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:09:12 crc kubenswrapper[4990]: E1205 01:09:12.930792 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:09:44.930784038 +0000 UTC m=+83.306999399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.931854 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.931886 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.931900 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.931920 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:12 crc kubenswrapper[4990]: I1205 01:09:12.931933 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:12Z","lastTransitionTime":"2025-12-05T01:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.033874 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.033947 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.033961 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.033979 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.033995 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.136831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.136875 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.136884 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.136900 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.136913 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.239584 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.239669 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.239696 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.239728 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.239752 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.307093 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/2.log" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.307910 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/1.log" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.311021 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f" exitCode=1 Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.311082 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.311148 4990 scope.go:117] "RemoveContainer" containerID="71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.312258 4990 scope.go:117] "RemoveContainer" containerID="eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f" Dec 05 01:09:13 crc kubenswrapper[4990]: E1205 01:09:13.312717 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.334473 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.342344 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.342377 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.342388 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.342404 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.342414 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.358018 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.380030 4990 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.397279 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.413864 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.428601 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.445509 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.445559 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.445575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.445598 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.445610 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.447151 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 
01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.459099 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.473523 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.489247 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.500170 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.510694 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.526281 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f
9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.540410 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.547582 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.547620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.547633 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.547650 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.547661 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.553941 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.569410 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:13Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.651346 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.651469 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.651529 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.651568 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.651611 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.755199 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.755286 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.755306 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.755336 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.755360 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.859540 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.859602 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.859613 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.859632 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.859643 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.929534 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.929549 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:13 crc kubenswrapper[4990]: E1205 01:09:13.929848 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:13 crc kubenswrapper[4990]: E1205 01:09:13.930009 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.963280 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.963341 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.963353 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.963373 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:13 crc kubenswrapper[4990]: I1205 01:09:13.963386 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:13Z","lastTransitionTime":"2025-12-05T01:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.066860 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.066919 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.066931 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.066950 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.066962 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.169738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.169800 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.169815 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.169841 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.169858 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.268812 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.273079 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.273144 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.273159 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.273182 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.273196 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.282990 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.287542 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.306313 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.316773 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/2.log" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.321367 4990 scope.go:117] "RemoveContainer" 
containerID="eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f" Dec 05 01:09:14 crc kubenswrapper[4990]: E1205 01:09:14.321585 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.323374 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.343053 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8
d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.360119 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.376282 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.376355 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.376379 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.376408 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.376430 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.385906 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f90
9d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe8f31557822a5c2424ffae17ccc7664d6379d1099661bdbfacb87d9db7f27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:08:55Z\\\",\\\"message\\\":\\\"ervices.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1205 01:08:55.868261 6385 services_controller.go:360] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager for network=default : 1.064481ms\\\\nI1205 01:08:55.868257 6385 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 01:08:55.868302 6385 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1205 01:08:55.868305 6385 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.400646 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.416694 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.436454 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.456127 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.472392 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.480232 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.480292 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.480311 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.480339 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.480358 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.492431 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.508461 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.526818 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.546368 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.569908 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.583447 4990 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.583553 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.583579 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.583610 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.583633 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.604853 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.628029 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.660728 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758ea
dafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 
01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.671939 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.681597 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.686140 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.686176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.686187 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.686206 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.686219 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.691440 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.701903 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.721377 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.733570 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.754802 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.772698 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.787007 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.789078 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.789136 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.789151 4990 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.789176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.789193 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.801118 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.811760 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.823834 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.839133 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312c
e4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.852701 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:14Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.892418 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.892534 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.892587 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.892612 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.892697 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.929964 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:14 crc kubenswrapper[4990]: I1205 01:09:14.930065 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:14 crc kubenswrapper[4990]: E1205 01:09:14.930140 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:14 crc kubenswrapper[4990]: E1205 01:09:14.930285 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:14.996277 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:14.996336 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:14.996354 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:14.996381 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:14.996403 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:14Z","lastTransitionTime":"2025-12-05T01:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.099590 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.099649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.099663 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.099682 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.099694 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.203783 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.203840 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.203858 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.203885 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.203908 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.307805 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.308125 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.308369 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.308524 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.308821 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.412543 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.413072 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.413228 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.413390 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.413577 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.517407 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.517907 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.518125 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.518279 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.518413 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.621668 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.622154 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.622300 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.622450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.622624 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.726115 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.726210 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.726231 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.726261 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.726281 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.729025 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.729055 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.729066 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.729079 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.729089 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.752261 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:15Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.759822 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.759912 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.759939 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.759996 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.760024 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.778323 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:15Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.784823 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.784875 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.784890 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.784914 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.784929 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.808744 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:15Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.814884 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.815236 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
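Every patch attempt above dies at the same admission webhook: the serving certificate behind https://127.0.0.1:9743 expired at 2025-08-24T17:21:41Z, while the node clock reads 2025-12-05T01:09:15Z. A minimal Go sketch for confirming this from inside the VM; the address and port are taken from the log, and the snippet is ad-hoc diagnostics, not part of the kubelet:

```go
// Sketch: inspect the certificate that the webhook endpoint from the log
// (https://127.0.0.1:9743) actually serves, to confirm the expiry window
// reported by the kubelet. Verification is skipped on purpose so the
// handshake succeeds even though the certificate is expired.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: we want the cert, not a trusted session
	})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.CommonName, cert.NotBefore, cert.NotAfter)
	}
}
```

With the certificate expired, the API server cannot complete the TLS handshake to the node.network-node-identity.openshift.io webhook, so every node status patch is rejected no matter what the payload contains.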
event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.815373 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.815545 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.815681 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.833137 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:15Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.838420 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.838498 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
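Separately from the webhook failure, the Ready condition itself is false because the container runtime reports no CNI configuration under /etc/kubernetes/cni/net.d/. A rough sketch of that readiness check, assuming the usual CNI file extensions (.conf, .conflist, .json); the real logic lives in the CNI/CRI plumbing, not in this snippet:

```go
// Sketch of the readiness check implied by the log line: the network plugin
// looks for at least one CNI config file under /etc/kubernetes/cni/net.d/.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; NetworkReady stays false")
	}
}
```

The directory stays empty until the network provider writes its configuration, which is exactly what the kubelet's "Has your network provider started?" hint is asking about.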
event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.838514 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.838538 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.838557 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.861701 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:15Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.861869 4990 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.865227 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
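The "update node status exceeds retry count" line marks the end of one bounded update cycle. A sketch of that control flow, assuming the upstream kubelet constant nodeStatusUpdateRetry = 5 as of recent Kubernetes releases; patchNodeStatus below is a stand-in for the real strategic-merge patch call that the webhook keeps rejecting:

```go
// Sketch of the bounded retry visible in the log: the kubelet attempts the
// node status patch a fixed number of times, logging "Error updating node
// status, will retry" for each failure, then gives up for this cycle with
// "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to match the upstream kubelet constant

func patchNodeStatus() error {
	// Stand-in: every attempt fails the way the log shows, because the
	// admission webhook's TLS certificate is expired.
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```

The kubelet then starts a fresh cycle on the next sync period, which is why the same burst of patch failures keeps recurring throughout this log.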
event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.865294 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.865308 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.865330 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.865344 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.930179 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.930180 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.930459 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:15 crc kubenswrapper[4990]: E1205 01:09:15.930558 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.968752 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.968808 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.968833 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.968860 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:15 crc kubenswrapper[4990]: I1205 01:09:15.968877 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:15Z","lastTransitionTime":"2025-12-05T01:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.072301 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.072357 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.072371 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.072395 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.072412 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:16Z","lastTransitionTime":"2025-12-05T01:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.175834 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.175892 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.175912 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.175940 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.175959 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:16Z","lastTransitionTime":"2025-12-05T01:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.278799 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.278868 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.278888 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.278917 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.278938 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:16Z","lastTransitionTime":"2025-12-05T01:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.382225 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.382312 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.382334 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.382363 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.382384 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:16Z","lastTransitionTime":"2025-12-05T01:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.485887 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.485987 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.486005 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.486033 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.486052 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:16Z","lastTransitionTime":"2025-12-05T01:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.589124 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.589188 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.589206 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.589235 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:16 crc kubenswrapper[4990]: I1205 01:09:16.589259 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:16Z","lastTransitionTime":"2025-12-05T01:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.008868 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.009143 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.009250 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:17 crc kubenswrapper[4990]: E1205 01:09:17.009327 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:17 crc kubenswrapper[4990]: E1205 01:09:17.009418 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:17 crc kubenswrapper[4990]: E1205 01:09:17.009660 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.013543 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.013607 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.013622 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.013649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.013666 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.116569 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.116610 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.116620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.116637 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.116647 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.219304 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.219351 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.219363 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.219382 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.219394 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.322672 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.322726 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.322738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.322773 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.322786 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.425741 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.425775 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.425785 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.425802 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.425813 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.528981 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.529030 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.529041 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.529061 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.529074 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.633003 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.633054 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.633066 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.633091 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.633106 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.737197 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.737243 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.737256 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.737274 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.737286 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.840702 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.840763 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.840782 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.840810 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.840827 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.930435 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:17 crc kubenswrapper[4990]: E1205 01:09:17.930706 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.944443 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.944803 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.945090 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.945287 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:17 crc kubenswrapper[4990]: I1205 01:09:17.945466 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:17Z","lastTransitionTime":"2025-12-05T01:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.049968 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.050050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.050070 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.050104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.050126 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.153962 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.154033 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.154061 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.154093 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.154118 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.258002 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.258083 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.258108 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.258141 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.258165 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.361188 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.361248 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.361265 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.361293 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.361309 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.464902 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.465000 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.465026 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.465065 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.465092 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.568184 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.568238 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.568252 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.568272 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.568284 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.671170 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.671243 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.671255 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.671272 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.671282 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.774827 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.775143 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.775163 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.775185 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.775201 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.878220 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.878282 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.878301 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.878324 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.878338 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.929726 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.929832 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.929734 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:18 crc kubenswrapper[4990]: E1205 01:09:18.929893 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:18 crc kubenswrapper[4990]: E1205 01:09:18.929984 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:18 crc kubenswrapper[4990]: E1205 01:09:18.930100 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.981390 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.981439 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.981451 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.981469 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:18 crc kubenswrapper[4990]: I1205 01:09:18.981508 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:18Z","lastTransitionTime":"2025-12-05T01:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.084975 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.085055 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.085066 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.085109 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.085122 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.188894 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.188973 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.188999 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.189032 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.189060 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.292360 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.292462 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.292531 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.292568 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.292592 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.395427 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.395560 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.395595 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.395625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.395643 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.498575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.498618 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.498631 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.498648 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.498665 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.602053 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.602178 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.602197 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.602221 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.602239 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.707261 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.707330 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.707348 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.707373 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.707402 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.810733 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.810797 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.810812 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.810832 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.810846 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.913460 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.913538 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.913559 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.913582 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.913597 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:19Z","lastTransitionTime":"2025-12-05T01:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:19 crc kubenswrapper[4990]: I1205 01:09:19.930685 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:19 crc kubenswrapper[4990]: E1205 01:09:19.930943 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.017439 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.017534 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.017552 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.017580 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.017597 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.120324 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.120429 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.120535 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.120584 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.120602 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.223882 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.223932 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.223945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.223984 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.223996 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.328056 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.328119 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.328138 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.328167 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.328188 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.431971 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.432042 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.432239 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.432257 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.432268 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.535561 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.535610 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.535620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.535640 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.535650 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.637851 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.637908 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.637921 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.637942 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.637955 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.740512 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.740557 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.740575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.740594 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.740607 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.842796 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.842832 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.842842 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.842856 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.842865 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.930061 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.930069 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.930199 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:20 crc kubenswrapper[4990]: E1205 01:09:20.930283 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:20 crc kubenswrapper[4990]: E1205 01:09:20.930495 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:20 crc kubenswrapper[4990]: E1205 01:09:20.930587 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.944910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.944946 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.944958 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.944975 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:20 crc kubenswrapper[4990]: I1205 01:09:20.944985 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:20Z","lastTransitionTime":"2025-12-05T01:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.048403 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.048462 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.048473 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.048499 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.048508 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.151118 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.151163 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.151173 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.151189 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.151200 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.253751 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.253808 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.253821 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.253839 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.253850 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.356293 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.356366 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.356392 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.356423 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.356448 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.459654 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.460034 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.460188 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.460366 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.460548 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.563931 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.563993 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.564006 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.564026 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.564040 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.667321 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.667387 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.667410 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.667444 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.667469 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.770518 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.770572 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.770587 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.770606 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.770619 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.874131 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.874506 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.874545 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.874570 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.874584 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.929462 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:21 crc kubenswrapper[4990]: E1205 01:09:21.929971 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.947427 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:21Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.964001 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:21Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.977273 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.977316 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.977325 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.977342 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.977352 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:21Z","lastTransitionTime":"2025-12-05T01:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.985421 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:21Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:21 crc kubenswrapper[4990]: I1205 01:09:21.999461 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:21Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.019607 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.037297 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.059769 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.078791 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.079825 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.079875 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.079891 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.079918 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.079933 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.093076 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.106565 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.122402 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.137778 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.150655 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.165556 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.183009 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.183103 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.183135 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.183172 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.183199 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.193457 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.206343 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.216563 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:22Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.285599 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.285639 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.285649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.285664 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.285674 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.387897 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.388226 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.388342 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.388452 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.388574 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.491155 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.491208 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.491221 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.491240 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.491252 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.594076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.594129 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.594141 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.594161 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.594173 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.697649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.697710 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.697725 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.697746 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.697761 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.800673 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.800718 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.800728 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.800744 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.800755 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.903214 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.903254 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.903264 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.903281 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.903290 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:22Z","lastTransitionTime":"2025-12-05T01:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.929830 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.929881 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:22 crc kubenswrapper[4990]: I1205 01:09:22.929936 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:22 crc kubenswrapper[4990]: E1205 01:09:22.930006 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:22 crc kubenswrapper[4990]: E1205 01:09:22.930101 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:22 crc kubenswrapper[4990]: E1205 01:09:22.930280 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.005865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.005905 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.005916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.005933 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.005948 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.109274 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.109333 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.109349 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.109372 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.109388 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.213016 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.213081 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.213098 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.213123 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.213139 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.315602 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.315640 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.315650 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.315666 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.315678 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.418796 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.418836 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.418849 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.418868 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.418880 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.522135 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.522180 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.522191 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.522208 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.522219 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.625556 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.625620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.625640 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.625664 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.625684 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.728463 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.728536 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.728549 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.728570 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.728586 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.831843 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.831910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.831930 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.831961 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.831979 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.930119 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:23 crc kubenswrapper[4990]: E1205 01:09:23.930354 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.934181 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.934217 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.934234 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.934255 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:23 crc kubenswrapper[4990]: I1205 01:09:23.934277 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:23Z","lastTransitionTime":"2025-12-05T01:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.038190 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.038254 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.038265 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.038283 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.038296 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.142886 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.142961 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.142977 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.143004 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.143025 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.246924 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.246991 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.247010 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.247037 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.247058 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.351225 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.351354 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.351378 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.351434 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.351458 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.454641 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.454715 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.454745 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.454783 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.454809 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.558040 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.558098 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.558117 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.558143 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.558177 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.661005 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.661074 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.661096 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.661125 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.661147 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.765868 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.765972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.765991 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.766015 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.766037 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.870075 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.870139 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.870152 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.870174 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.870189 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.929609 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.929662 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.929634 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:24 crc kubenswrapper[4990]: E1205 01:09:24.929881 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:24 crc kubenswrapper[4990]: E1205 01:09:24.930245 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:24 crc kubenswrapper[4990]: E1205 01:09:24.930465 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.930625 4990 scope.go:117] "RemoveContainer" containerID="eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f" Dec 05 01:09:24 crc kubenswrapper[4990]: E1205 01:09:24.930811 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.973531 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.973605 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.973633 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.973661 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:24 crc kubenswrapper[4990]: I1205 01:09:24.973680 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:24Z","lastTransitionTime":"2025-12-05T01:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.076932 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.077026 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.077048 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.077078 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.077099 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.180496 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.180544 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.180555 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.180575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.180591 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.283566 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.283620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.283632 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.283655 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.283669 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.385921 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.385975 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.385991 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.386016 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.386040 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.489555 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.489648 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.489664 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.489693 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.489717 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.592108 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.592143 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.592152 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.592176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.592189 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.695757 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.695831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.695855 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.695926 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.696006 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.799698 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.799749 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.799761 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.799780 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.799795 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.902818 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.902870 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.902878 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.902899 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.902910 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:25Z","lastTransitionTime":"2025-12-05T01:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:25 crc kubenswrapper[4990]: I1205 01:09:25.929936 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:25 crc kubenswrapper[4990]: E1205 01:09:25.930162 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.006093 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.006413 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.006539 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.006677 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.006796 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.045944 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.046011 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.046030 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.046058 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.046078 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.068364 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:26Z is after 
2025-08-24T17:21:41Z" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.073947 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.074180 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.074260 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.074396 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.074558 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.094785 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:26Z is after 
2025-08-24T17:21:41Z" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.100675 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.100717 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.100734 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.100759 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.100774 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.116789 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:26Z is after 
2025-08-24T17:21:41Z" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.121704 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.121777 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.121793 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.121815 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.121828 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.140084 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:26Z is after 
2025-08-24T17:21:41Z" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.145240 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.145606 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.145688 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.145776 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.145840 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.162597 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:26Z is after 
2025-08-24T17:21:41Z" Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.162935 4990 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.164907 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.164948 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.164958 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.165007 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.165020 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.268036 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.268094 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.268110 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.268133 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.268147 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.374590 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.374649 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.374662 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.374682 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.374695 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.478098 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.478158 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.478169 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.478194 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.478208 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.581646 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.581710 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.581746 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.581778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.581800 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.684682 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.684738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.684755 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.684778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.684796 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.787916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.787967 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.787977 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.787998 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.788027 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.890941 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.890985 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.890996 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.891014 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.891023 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.929737 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.929868 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.929935 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.929763 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.930125 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:26 crc kubenswrapper[4990]: E1205 01:09:26.930235 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.993927 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.993978 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.993992 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.994010 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:26 crc kubenswrapper[4990]: I1205 01:09:26.994062 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:26Z","lastTransitionTime":"2025-12-05T01:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.097216 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.097274 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.097287 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.097310 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.097324 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.200560 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.200646 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.200680 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.200714 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.200738 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.303654 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.303699 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.303707 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.303722 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.303733 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.406635 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.406703 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.406722 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.406747 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.406765 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.508870 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.508916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.508928 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.508948 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.508957 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.612003 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.612056 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.612069 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.612088 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.612102 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.715087 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.715149 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.715166 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.715191 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.715208 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.818034 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.818088 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.818104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.818127 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.818143 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.921932 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.922015 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.922027 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.922049 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.922060 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:27Z","lastTransitionTime":"2025-12-05T01:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:27 crc kubenswrapper[4990]: I1205 01:09:27.929344 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:27 crc kubenswrapper[4990]: E1205 01:09:27.929475 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.025580 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.025635 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.025651 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.025674 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.025696 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.129221 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.129312 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.129331 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.129363 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.129386 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.148714 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:28 crc kubenswrapper[4990]: E1205 01:09:28.148876 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:28 crc kubenswrapper[4990]: E1205 01:09:28.148952 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:10:00.148933439 +0000 UTC m=+98.525148800 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.233297 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.233355 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.233365 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.233388 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.233402 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.337423 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.337554 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.337575 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.337604 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.337625 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.440527 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.440572 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.440583 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.440603 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.440625 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.543600 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.543657 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.543676 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.543702 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.543719 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.646903 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.646959 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.646972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.646992 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.647009 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.750576 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.750633 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.750645 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.750664 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.750677 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.853731 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.853834 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.853857 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.853884 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.853902 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.929595 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.929708 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:28 crc kubenswrapper[4990]: E1205 01:09:28.929804 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.929632 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:28 crc kubenswrapper[4990]: E1205 01:09:28.929986 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:28 crc kubenswrapper[4990]: E1205 01:09:28.930123 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.962815 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.962904 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.962921 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.962945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:28 crc kubenswrapper[4990]: I1205 01:09:28.962963 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:28Z","lastTransitionTime":"2025-12-05T01:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.065672 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.065757 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.065780 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.065812 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.065831 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.169586 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.169635 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.169645 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.169674 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.169686 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.273894 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.273982 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.274001 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.274031 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.274055 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.377234 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.377334 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.377385 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.377415 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.377462 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.480703 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.480758 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.480773 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.480794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.480809 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.583508 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.583554 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.583566 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.583584 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.583596 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.687276 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.687345 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.687365 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.687393 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.687412 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.790549 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.790618 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.790654 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.790692 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.790719 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.894720 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.894768 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.894778 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.894795 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.894805 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.930366 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:29 crc kubenswrapper[4990]: E1205 01:09:29.930554 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.997616 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.997669 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.997682 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.997697 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:29 crc kubenswrapper[4990]: I1205 01:09:29.997708 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:29Z","lastTransitionTime":"2025-12-05T01:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.100676 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.100746 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.100768 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.100802 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.100822 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.208402 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.208470 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.208520 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.208558 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.208584 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.311576 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.311625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.311638 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.311673 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.311687 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.379329 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/0.log" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.379387 4990 generic.go:334] "Generic (PLEG): container finished" podID="c4914133-b0cd-4d12-84d5-c99379e2324a" containerID="65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566" exitCode=1 Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.379427 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerDied","Data":"65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.379893 4990 scope.go:117] "RemoveContainer" containerID="65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.401190 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.414206 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.414495 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.414731 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.414848 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.414932 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.416216 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.432081 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.444603 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.458046 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.475749 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.496889 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.518465 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.518538 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.518552 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.518573 4990 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.518592 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.523322 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f90
9d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.535566 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.550034 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.565100 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.579119 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.592533 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.604689 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.619845 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.621267 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.621312 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.621328 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 
01:09:30.621358 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.621381 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.641913 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.653826 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:30Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.724433 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.724511 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.724526 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.724547 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.724560 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.828004 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.828049 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.828059 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.828078 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.828090 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.929434 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:30 crc kubenswrapper[4990]: E1205 01:09:30.929649 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.929732 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 01:09:30 crc kubenswrapper[4990]: E1205 01:09:30.929789 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.929850 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:09:30 crc kubenswrapper[4990]: E1205 01:09:30.929904 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.931701 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.931753 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.931766 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.931781 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:30 crc kubenswrapper[4990]: I1205 01:09:30.931795 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:30Z","lastTransitionTime":"2025-12-05T01:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.034317 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.034383 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.034402 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.034431 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.034453 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.138351 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.138423 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.138435 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.138450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.138807 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.241762 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.241819 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.241833 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.241852 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.241864 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.343839 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.343899 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.343927 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.343945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.343958 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.385785 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/0.log"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.385859 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerStarted","Data":"2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.404781 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.425074 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.446572 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.446606 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.446617 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.446630 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.446640 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.453291 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.470745 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.482138 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.496898 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.510829 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.524673 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.535453 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.548424 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.548465 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.548475 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.548503 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.548515 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.554762 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.574341 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.593282 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.615116 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.635234 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.650718 4990 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.650922 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.651019 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.651125 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.651367 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.653352 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.670291 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.684287 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.754250 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.754535 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.754625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.754721 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.754800 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.857983 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.858044 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.858058 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.858082 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.858100 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.930010 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:31 crc kubenswrapper[4990]: E1205 01:09:31.930244 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.950167 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.960662 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.960691 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.960703 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.960721 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.960737 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:31Z","lastTransitionTime":"2025-12-05T01:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.967656 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:31 crc kubenswrapper[4990]: I1205 01:09:31.979467 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.000279 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:31Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.017134 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.034686 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.046815 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.057948 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.062788 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.062834 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.062845 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.062864 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.062876 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.072787 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.088033 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.100295 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.110149 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.126941 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.140606 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.154018 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.165112 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.165151 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.165162 4990 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.165177 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.165187 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.171139 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.184822 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:32Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.266683 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.266728 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.266738 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.266753 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.266764 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.369853 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.369908 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.369919 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.369937 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.369947 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.471943 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.471982 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.471992 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.472006 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.472015 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.574369 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.574427 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.574440 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.574461 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.574500 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.677722 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.677764 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.677775 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.677791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.677805 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.780719 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.780794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.780813 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.780839 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.780857 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.884074 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.884171 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.884191 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.884216 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.884236 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.929659 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.929710 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.929729 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:32 crc kubenswrapper[4990]: E1205 01:09:32.929835 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:32 crc kubenswrapper[4990]: E1205 01:09:32.929934 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:32 crc kubenswrapper[4990]: E1205 01:09:32.930027 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.987097 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.987194 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.987220 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.987253 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:32 crc kubenswrapper[4990]: I1205 01:09:32.987278 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:32Z","lastTransitionTime":"2025-12-05T01:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.089735 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.089779 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.089791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.089810 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.089828 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.192612 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.192671 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.192680 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.192699 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.192712 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.295135 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.295175 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.295186 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.295206 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.295218 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.397389 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.397442 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.397456 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.397476 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.397517 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.501176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.501265 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.501281 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.501301 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.501316 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.604656 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.604709 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.604719 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.604736 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.604747 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.708534 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.708585 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.708601 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.708623 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.708639 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.812362 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.812430 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.812444 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.812469 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.812515 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.915906 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.915957 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.915971 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.915992 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.916007 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:33Z","lastTransitionTime":"2025-12-05T01:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:33 crc kubenswrapper[4990]: I1205 01:09:33.929772 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:33 crc kubenswrapper[4990]: E1205 01:09:33.930003 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.019566 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.019607 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.019618 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.019638 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.019651 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.122895 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.122963 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.122983 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.123011 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.123031 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.225670 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.225713 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.225721 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.225737 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.225748 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.328138 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.328200 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.328218 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.328244 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.328263 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.431237 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.431278 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.431287 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.431308 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.431322 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.533962 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.534032 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.534051 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.534076 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.534094 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.637126 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.637170 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.637181 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.637201 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.637217 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.739458 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.739566 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.739655 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.739682 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.739738 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.842723 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.842764 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.842797 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.842818 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.842829 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.929994 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.930040 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.930096 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:34 crc kubenswrapper[4990]: E1205 01:09:34.930252 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:34 crc kubenswrapper[4990]: E1205 01:09:34.930364 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:34 crc kubenswrapper[4990]: E1205 01:09:34.930437 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.944766 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.944814 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.944830 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.944850 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:34 crc kubenswrapper[4990]: I1205 01:09:34.944864 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:34Z","lastTransitionTime":"2025-12-05T01:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.047859 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.047914 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.047927 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.047948 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.047961 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.150794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.150835 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.150843 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.150859 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.150870 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.253546 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.253620 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.253639 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.253667 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.253685 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.356771 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.356840 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.356869 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.356902 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.356923 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.459243 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.459288 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.459301 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.459318 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.459331 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.561632 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.561670 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.561680 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.561696 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.561705 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.664505 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.664550 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.664563 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.664582 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.664596 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.766584 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.766628 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.766637 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.766653 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.766663 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.869853 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.869916 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.869940 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.869966 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.869984 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.930438 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:35 crc kubenswrapper[4990]: E1205 01:09:35.930616 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.973476 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.973570 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.973593 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.973623 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:35 crc kubenswrapper[4990]: I1205 01:09:35.973645 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:35Z","lastTransitionTime":"2025-12-05T01:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.076765 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.076802 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.076815 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.076831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.076844 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.180321 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.180372 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.180385 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.180405 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.180419 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.266702 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.266768 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.266783 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.266806 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.266822 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.283062 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:36Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.288230 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.288291 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.288303 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.288324 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.288339 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.305264 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:36Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.310295 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.310375 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
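The failed patch above bundles two distinct faults: the node's Ready condition is False because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and the attempt to report that very status is then rejected because the node-identity webhook's certificate has expired. The CNI half is easy to confirm on the node itself. Below is a minimal diagnostic sketch in Go, not part of the kubelet; the file name cnicheck.go and the extension filter are assumptions based on common CNI conventions, while the directory path is copied verbatim from the error message:

// cnicheck.go: a diagnostic sketch (not kubelet code) that lists whatever
// network configuration the directory from the error message currently holds.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path taken from the NetworkPluginNotReady message in the journal above.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		// The CNI library conventionally loads .conf, .conflist and .json files;
		// this filter is an assumption, adjust it for your runtime if needed.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; the network plugin has not written its config yet")
	}
}

If the sketch reports no files, the network operator simply has not written its configuration yet, which is consistent with every NetworkPluginNotReady message in this journal.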
event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.310404 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.310450 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.310528 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.328474 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:36Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.334786 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.334860 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
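Every retry of the status patch fails the same way: the API server must call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint presents a serving certificate that expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2025-12-05. A quick way to confirm what the endpoint actually serves, independent of the kubelet, is a sketch like the following (hypothetical certcheck.go; the address is copied from the error text):

// certcheck.go: a sketch that dials the webhook endpoint from the log and
// reports the validity window of the certificate chain it presents.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// Webhook address taken from the kubelet error above.
	addr := "127.0.0.1:9743"
	// InsecureSkipVerify is deliberate: verification is exactly what fails,
	// and we still want to read the certificate the server hands us.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	defer conn.Close()
	now := time.Now()
	for i, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("cert[%d] subject=%s\n", i, cert.Subject)
		fmt.Printf("  notBefore=%s notAfter=%s expired=%v\n",
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

The program sends no request body and only inspects the presented chain, so the skipped verification is harmless here. On CRC this class of failure usually clears once a long-stopped VM has had time to rotate its internal certificates after restart, but inspecting the window first confirms that the expired certificate, not the CNI gap alone, is what blocks the status updates.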
event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.334880 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.334914 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.334940 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.356172 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:36Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.360788 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.360904 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
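Because the kubelet retries the same status patch several times per sync, the journal accumulates near-identical multi-kilobyte records and the distinct failures get buried in repetition. A small summarizer can collapse a dump like this into per-signature counts; the sketch below (hypothetical logsummary.go, with the signature strings lifted from the entries above) reads the journal from stdin:

// logsummary.go: a triage sketch that counts how often each repeating
// kubelet failure signature occurs in a journal dump fed on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Substrings taken from the log entries above, mapped to short labels.
	signatures := map[string]string{
		"no CNI configuration file":  "CNI config missing (NetworkPluginNotReady)",
		"Error updating node status": "node status patch failed",
		"certificate has expired":    "webhook TLS certificate expired",
		"Node became not ready":      "Ready condition set to False",
	}
	counts := make(map[string]int)
	sc := bufio.NewScanner(os.Stdin)
	// Journal lines in this dump are very long; enlarge the scanner buffer.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		// A line matching several signatures is counted once per signature,
		// which is acceptable for triage.
		for needle, label := range signatures {
			if strings.Contains(line, needle) {
				counts[label]++
			}
		}
	}
	for label, n := range counts {
		fmt.Printf("%6d  %s\n", n, label)
	}
}

Something like journalctl -u kubelet --no-pager | go run logsummary.go then yields a handful of count lines instead of megabytes of repeated payloads, making the two underlying faults immediately visible.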
event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.360937 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.360973 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.360999 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.379958 4990 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2415bd45-5145-44bb-b5a4-8197e19c19f6\\\",\\\"systemUUID\\\":\\\"ce964c17-1cf3-4471-84ac-c2fc1079c2f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:36Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.380326 4990 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.382723 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
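Every status patch in this capture is being rejected for the same reason, spelled out at the end of the error above: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-05, so the kubelet's retries exhaust ("update node status exceeds retry count"). A minimal sketch of how one might confirm the expiry from the node itself, assuming Python 3 with the third-party cryptography package (>= 42 for the *_utc accessors); the host, port, and expiry date come from the log, everything else is illustrative:

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the log

    # Verification must be disabled for the probe: an expired certificate
    # would otherwise abort the handshake before we could read it at all.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before_utc)
    print("not after: ", cert.not_valid_after_utc)
    print("expired:   ", now > cert.not_valid_after_utc)

If the printed "not after" matches the 2025-08-24T17:21:41Z in the error, the fix would lie on the certificate side (rotating the network-node-identity serving certificate), not in the kubelet, which is behaving as designed by retrying.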
event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.382770 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.382782 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.382804 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.382819 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.485791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.485846 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.485858 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.485878 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.485890 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.588974 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.589017 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.589033 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.589055 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.589073 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.691861 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.691898 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.691908 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.691922 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.691932 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.794550 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.794603 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.794619 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.794640 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.794651 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.896667 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.896798 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.896813 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.896829 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.896842 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.929929 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.930020 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:36 crc kubenswrapper[4990]: I1205 01:09:36.929929 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.930079 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.930143 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:36 crc kubenswrapper[4990]: E1205 01:09:36.930215 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:36.999878 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:36.999933 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:36.999950 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:36.999972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:36.999991 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:36Z","lastTransitionTime":"2025-12-05T01:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.102109 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.102149 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.102160 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.102176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.102188 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.205323 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.205382 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.205397 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.205418 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.205439 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.308464 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.308576 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.308601 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.308628 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.308654 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.412305 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.412390 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.412408 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.412446 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.412467 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.515046 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.515178 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.515203 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.515235 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.515260 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.619329 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.619410 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.619436 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.619532 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.619563 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.722831 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.722897 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.722917 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.722946 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.722963 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.826794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.826852 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.826865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.826884 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.826898 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.929856 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:37 crc kubenswrapper[4990]: E1205 01:09:37.930195 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
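The NotReady condition repeated throughout this section has a single concrete trigger, quoted in every entry: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick filesystem check along the lines below could confirm it on the node; the paths are the ones named in the log, and the /host prefix in the multus message further down is assumed to be the container's mount of the node root, so the host-side path is taken to be /run/multus/cni/net.d/:

    from pathlib import Path

    # Directory the kubelet polls for a CNI config (per the NotReady message),
    # and the readiness-indicator file multus waits for ovn-kubernetes to write
    # (per the kube-multus termination message later in this capture).
    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")
    OVN_INDICATOR = Path("/run/multus/cni/net.d/10-ovn-kubernetes.conf")

    if CNI_CONF_DIR.is_dir():
        confs = sorted(CNI_CONF_DIR.glob("*.conf*"))  # .conf and .conflist
        print(f"{CNI_CONF_DIR}: {len(confs)} CNI config file(s)")
        for c in confs:
            print(f"  {c.name}")
    else:
        print(f"{CNI_CONF_DIR}: directory missing")

    print(f"{OVN_INDICATOR}: {'present' if OVN_INDICATOR.exists() else 'missing'}")

Both checks failing at once is consistent with the rest of the log: ovn-kubernetes has not come up (its controller is restarting), so neither the indicator file nor the final CNI config ever appears.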
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.930576 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.930626 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.930635 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.930654 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.930670 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:37Z","lastTransitionTime":"2025-12-05T01:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:37 crc kubenswrapper[4990]: I1205 01:09:37.931663 4990 scope.go:117] "RemoveContainer" containerID="eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.034162 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.034250 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.034285 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.034326 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.034351 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.137057 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.137094 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.137106 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.137122 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.137136 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.240364 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.240418 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.240432 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.240452 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.240466 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.344415 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.344542 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.344561 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.344588 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.344611 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.412835 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/2.log" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.417038 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.417738 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.443353 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.448145 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.448269 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.448293 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.448321 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.448343 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.473340 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 
01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.488010 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.511091 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.525935 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.551464 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.551537 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.551549 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.551570 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.551554 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.551581 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.571127 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.589812 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.609294 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.625841 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.638202 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.651334 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.654272 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.654306 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.654316 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.654331 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.654340 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.665555 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.681522 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.695385 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.716868 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.733724 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.756625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.756667 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.756675 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.756691 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.756702 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.859865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.859920 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.859931 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.859955 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.859966 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.929362 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.929573 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.929653 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:38 crc kubenswrapper[4990]: E1205 01:09:38.929643 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:38 crc kubenswrapper[4990]: E1205 01:09:38.929806 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:38 crc kubenswrapper[4990]: E1205 01:09:38.929906 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.962191 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.962251 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.962268 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.962287 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:38 crc kubenswrapper[4990]: I1205 01:09:38.962301 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:38Z","lastTransitionTime":"2025-12-05T01:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.064868 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.064900 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.064910 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.064925 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.064934 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.167656 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.167712 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.167724 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.167741 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.167753 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.270981 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.271023 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.271037 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.271058 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.271073 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.374869 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.374919 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.374934 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.374957 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.374972 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.423094 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/3.log" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.423775 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/2.log" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.426983 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" exitCode=1 Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.427032 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.427088 4990 scope.go:117] "RemoveContainer" containerID="eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.427993 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:09:39 crc kubenswrapper[4990]: E1205 01:09:39.428207 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.457804 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.477613 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.477656 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.477669 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.477686 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.477699 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.500865 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.525588 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.539083 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.551807 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.567226 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.580550 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.580595 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.580609 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.580632 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.580653 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.584184 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.599689 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.619039 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.635789 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.653554 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.677579 4990 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.683927 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.684009 4990 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.684036 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.684078 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.684107 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.693085 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.707287 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.723071 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness 
Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.744047 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb074f137b0178e371feb06c41b7d35c06495f909d11e2388ff1528b3933b11f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:12Z\\\",\\\"message\\\":\\\"1:09:12.948727 6579 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949116 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949284 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 01:09:12.949455 6579 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 01:09:12.949992 6579 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 01:09:12.950108 6579 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 01:09:12.950184 6579 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 01:09:12.950190 6579 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 01:09:12.950213 6579 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 01:09:12.950272 6579 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 01:09:12.950282 6579 factory.go:656] Stopping watch factory\\\\nI1205 01:09:12.950300 6579 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 01:09:12.950310 6579 ovnkube.go:599] Stopped ovnkube\\\\nI1205 01:09:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:39Z\\\",\\\"message\\\":\\\"ble:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 01:09:38.979817 6882 ovnkube.go:137] failed to run ovnkube: [failed to 
start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z]\\\\nI1205 01:09:38.979694 6882 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1205 01:09:38.979840 6882 obj_retry.go:409] Going to retry *v1.Pod resource setup for 17 objects: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l openshift-dns/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.761406 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:39Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.787663 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.787724 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.787743 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.787771 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.787791 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.891023 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.891085 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.891104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.891134 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.891154 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.930050 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s"
Dec 05 01:09:39 crc kubenswrapper[4990]: E1205 01:09:39.930531 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.946702 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.994880 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.994970 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.994991 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.995021 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:39 crc kubenswrapper[4990]: I1205 01:09:39.995040 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:39Z","lastTransitionTime":"2025-12-05T01:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.103118 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.103191 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.103223 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.103362 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.103465 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.208086 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.208135 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.208146 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.208163 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.208175 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.311270 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.311344 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.311364 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.311390 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.311409 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.414160 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.414200 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.414211 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.414226 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.414236 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.433745 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/3.log"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.438279 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"
Dec 05 01:09:40 crc kubenswrapper[4990]: E1205 01:09:40.438525 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.461565 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
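The node-status bursts above (four "Recording event message" entries followed by one "Node became not ready" setter) repeat roughly every 100ms: each sync re-reads the runtime's network readiness, and CRI-O keeps reporting NetworkPluginNotReady because nothing has written a CNI config yet. A quick way to confirm the condition on the node is to look at the directory named in the message; this is a minimal stdlib-Python sketch, not the kubelet's actual check (which lives in CRI-O/ocicni):

#!/usr/bin/env python3
"""Report whether the CNI config dir the kubelet complains about is empty."""
from pathlib import Path

# Directory taken verbatim from the log line; plain Kubernetes nodes
# usually use /etc/cni/net.d instead.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs(dirpath: Path) -> list[Path]:
    # ocicni treats .conf, .conflist and .json files as candidate configs.
    if not dirpath.is_dir():
        return []
    return sorted(p for p in dirpath.iterdir() if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs(CNI_DIR)
    if found:
        print("CNI configs present:", ", ".join(p.name for p in found))
    else:
        print(f"no CNI config under {CNI_DIR} -- matches NetworkPluginNotReady")

In this capture the directory apparently stays empty because ovnkube-controller, the container that would install the OVN-Kubernetes config, is itself in CrashLoopBackOff (the "back-off 40s" entry above).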
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.481335 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.495802 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.512246 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.517385 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.517426 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.517437 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.517453 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.517463 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.528602 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z"
Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.549407 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
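Every "Failed to update status for pod" entry in this capture fails for the same reason: the pod.network-node-identity.openshift.io validating webhook at 127.0.0.1:9743 serves a certificate whose NotAfter is 2025-08-24T17:21:41Z, while the node clock reads 2025-12-05T01:09:40Z. (The webhook itself is served by the network-node-identity-vrzqb pod, whose own status patch is failing in the entry that continues below.) The Go x509 error embeds both timestamps, so the skew can be read straight out of the line; a small stdlib sketch, with the log text shortened to the relevant fragment:

#!/usr/bin/env python3
"""Compute how long the webhook certificate had been expired at request time."""
import re
from datetime import datetime

LINE = ("tls: failed to verify certificate: x509: certificate has expired or "
        "is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z")

m = re.search(r"current time (\S+) is after (\S+)", LINE)
if m:
    now, not_after = (datetime.fromisoformat(t.replace("Z", "+00:00")) for t in m.groups())
    print(f"certificate expired {now - not_after} before the request")
    # prints: certificate expired 102 days, 7:47:59 before the request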
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.568847 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.583063 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.605108 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f
9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.621093 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.621143 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.621155 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.621174 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.621186 4990 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.621149 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.644783 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status 
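The status manager retries each pod on every sync, so the same patch failures recur for every pod on the node. When triaging a capture like this one, it helps to collapse the noise into a per-pod count; the sketch below assumes the journal was first saved to a file named kubelet.log (a hypothetical path, e.g. via journalctl -u kubelet) and parses the pod="namespace/name" field shown in the entries above:

#!/usr/bin/env python3
"""Tally which pods fail status updates in a saved kubelet journal."""
import re
from collections import Counter

PAT = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')

counts: Counter[str] = Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        counts.update(PAT.findall(line))

for pod, n in counts.most_common():
    print(f"{n:4d}  {pod}")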
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.659945 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status 
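The patch bodies in these entries are hard to read because they are quoted twice: once when the patch JSON is embedded in the Go error string, and again when klog quotes err="...", which is why every quote arrives as \\\". Once decoded, each body is an ordinary strategic-merge patch (the $setElementOrder/conditions directive pins the ordering of the conditions list). A rough decoding sketch, run here against an abbreviated copy of the network-operator entry:

#!/usr/bin/env python3
"""Unescape and pretty-print a status patch quoted inside a kubelet log entry."""
import json
import re

# Abbreviated from the network-operator-58b4c7f79c-55gtf entry above.
entry = r'err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"}}\" for pod'

m = re.search(r'failed to patch status \\"(.*)\\" for pod', entry)
if m:
    payload = m.group(1)
    for _ in range(2):  # peel both quoting layers
        payload = payload.encode("utf-8").decode("unicode_escape")
    print(json.dumps(json.loads(payload), indent=2))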
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 
01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.672636 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473b4811-d951-4e31-ba0c-a4f090ab8973\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf0c2fdd19969720f93ba0e18489521ef2a13408e2b9cb3207a43d0258dada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768ff72b1ea934bc74cf79e63af47c25934013a54a255336bfcc59308cb7637f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768ff72b1ea934bc74cf79e63af47c25934013a54a255336bfcc59308cb7637f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.697688 4990 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.724436 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.724574 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:39Z\\\",\\\"message\\\":\\\"ble:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 01:09:38.979817 6882 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z]\\\\nI1205 01:09:38.979694 6882 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1205 01:09:38.979840 6882 obj_retry.go:409] Going to retry *v1.Pod resource setup for 17 objects: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l openshift-dns/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.724824 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.724936 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.724953 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.724964 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.737653 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.750262 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.763108 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:40Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.827416 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.827459 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.827471 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.827506 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.827521 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929294 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929353 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929398 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:40 crc kubenswrapper[4990]: E1205 01:09:40.929398 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:40 crc kubenswrapper[4990]: E1205 01:09:40.929507 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:40 crc kubenswrapper[4990]: E1205 01:09:40.929551 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929819 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929852 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929882 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:40 crc kubenswrapper[4990]: I1205 01:09:40.929893 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:40Z","lastTransitionTime":"2025-12-05T01:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.032419 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.032462 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.032471 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.032500 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.032513 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.135536 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.135586 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.135598 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.135619 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.135632 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.239104 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.239171 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.239190 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.239218 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.239235 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.342865 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.342930 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.342945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.342965 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.342980 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.446045 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.446120 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.446139 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.446165 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.446179 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.549881 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.549945 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.549959 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.549978 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.549991 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.653183 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.653690 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.653822 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.653941 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.654031 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.758148 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.758220 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.758239 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.758271 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.758293 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.861626 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.861698 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.861726 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.861774 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.861800 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.930415 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:41 crc kubenswrapper[4990]: E1205 01:09:41.930671 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.953761 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8fd7b2d784cfbe8d9ea9282863e9259309dc62604c3a7d7d9f376b1dd3c89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.965009 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.965131 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.965152 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.965186 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.965215 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:41Z","lastTransitionTime":"2025-12-05T01:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:41 crc kubenswrapper[4990]: I1205 01:09:41.982717 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"644fbc14-61e3-4544-b42b-da32f942c0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531a3fe35c53fc02d57c85ec09e66ca43962c444bf7a59abb676020240ed91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23e52e94431d6b40e17c873e4b1e114e35defb2573cb8543f0489479645b50f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bfc822f4753641b15022ab800fbc06defdf2866b11618e62785ae046ba796e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ec55f18e98135f2e607681b23d5d60b61c1693e06f33959e27b104c62ef4d7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77d519f9280ff4a34e205ddab68df000db43e165cc62c3a9fe58f20a147a5a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692a48abd86261069025f49bddbbd8eae998eaecb4b7a5fe350ea652c2300200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0024bf40712de1bc2b41b801370dfac5d3f450e972400a430c4044aa7cde53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-htx8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f6zb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:41Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.004285 4990 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8617140c-972f-4ec0-b814-350305fff19f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd7abb19014a2e2eb714140cf0544b6dda5e8e729c0762950b39733a53a8981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5313c74541ba3388246397e45fa492ca200b8897142d8a648b0c34c7c576a559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9brj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pss5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.018412 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"473b4811-d951-4e31-ba0c-a4f090ab8973\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf0c2fdd19969720f93ba0e18489521ef2a13408e2b9cb3207a43d0258dada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768ff72b1ea934bc74cf79e63af47c25934013a54a255336bfcc59308cb7637f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768ff72b1ea934bc74cf79e63af47c25934013a54a255336bfcc59308cb7637f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 
crc kubenswrapper[4990]: I1205 01:09:42.034111 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rdhk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4914133-b0cd-4d12-84d5-c99379e2324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:30Z\\\",\\\"message\\\":\\\"2025-12-05T01:08:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee\\\\n2025-12-05T01:08:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3bee9049-22eb-4bcb-b87f-20a3e9c162ee to /host/opt/cni/bin/\\\\n2025-12-05T01:08:45Z [verbose] multus-daemon started\\\\n2025-12-05T01:08:45Z [verbose] Readiness Indicator file check\\\\n2025-12-05T01:09:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45jp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rdhk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.058637 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeec70d-1c5c-434e-90bc-95620458151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T01:09:39Z\\\",\\\"message\\\":\\\"ble:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 01:09:38.979817 6882 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:38Z is after 2025-08-24T17:21:41Z]\\\\nI1205 01:09:38.979694 6882 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1205 01:09:38.979840 6882 obj_retry.go:409] Going to retry *v1.Pod resource setup for 17 objects: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l openshift-dns/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:09:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8ffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4w6g9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.068593 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.068656 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.068668 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.068708 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.068719 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.072048 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vlg2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccea29b1-256e-417e-985e-ca477e0b8d7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5535c15a27a700aeea04a7cd4a8bec6709fe1de66fc4dc8e9edb9d7d00900ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vlg2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.086097 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7760172e-33aa-4de9-bd10-6a92c0851c6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqpsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxb6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.101370 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a14f02-7a4f-422f-a8c6-c611162c1bd3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407f3b7963c007e3baf021afe67f7c9836422245e3a9e89a2277ec1d98ff27d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5844498d95e908f579a1ffcd6d0ba838c470c12cad2dfd89e8d4df5f7931cfbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6afe31db8331e521d8c92b070693d2997a7a26483bf351586b38ebb5869b53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.115674 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceb0742ee304b5ab9e933928571c99d7b115ffd0c66bf1fa4d8b49a8f94f12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.128044 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.140810 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f072df2-6ddf-4707-8852-a60655293cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e613da377c579e80511a6f80f054bfde63e22c2015303d5261844a5cc647c338\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7kcz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.152736 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae199f42-bfab-4367-aadd-54f3ab99b342\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:09:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cd1e00c990d5f61ca755a13e8fb3a9e841975edc5dea3e2a51f715d2556c1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f954071f194ae52b5b005d748ce92ac2507ac58868aa9fadcf9afcf9b9d8f71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5abd03392e388089cf716a7ea2eea41895e742cd173a3b217bbbd555e62c237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eac40a101194d00ec5ee1c7595a4d0baecbf61dda5ae671e03df521f2397a22c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.166205 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.170948 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.171008 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.171023 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.171045 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.171057 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.180241 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903d06de782cb093a57c7eddce4f960b2d12cd14b0643b985fc31ce738ef7f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4046f238ad566606abfa0d4c7f7265a51a455bd824d11be3a43af99f67a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.192847 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.207703 4990 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6580a04-67de-48f9-9da2-56cb4377af48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712fb58c9b4532664c01ad15dfc44a9328183bcfbbe1b5d800cc316c14052fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6c262\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zxlh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.225854 4990 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0aa465e0-0df4-4883-b893-6244a198c6c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f
9998becb8f67650122217\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T01:08:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 01:08:34.280291 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 01:08:34.281649 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2597399250/tls.crt::/tmp/serving-cert-2597399250/tls.key\\\\\\\"\\\\nI1205 01:08:40.510761 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 01:08:40.517876 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 01:08:40.517915 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 01:08:40.517958 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 01:08:40.517969 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 01:08:40.528564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 01:08:40.528611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528620 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 01:08:40.528628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 01:08:40.528634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 01:08:40.528640 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 01:08:40.528644 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 01:08:40.528936 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 01:08:40.537554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T01:08:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T01:08:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T01:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T01:08:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T01:09:42Z is after 2025-08-24T17:21:41Z" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.273923 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.273979 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.273991 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.274010 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.274021 4990 setters.go:603] "Node became not ready" node="crc" 
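(annotation) The "Failed to update status for pod" records above carry the attempted status patch as a doubly quoted JSON string: klog quotes the patch value, and every inner quote is escaped again, which is why the payloads read as long \\\" runs. A minimal sketch of recovering a readable object from one of those payloads, assuming the quoted span has already been isolated into a Python string; the trimmed sample is hypothetical apart from the kube-scheduler pod UID taken from the log, and deeper journal escaping may need one extra json.loads round:

```python
import json

# Hypothetical trimmed sample of a quoted patch payload as it appears in the
# log, with one level of backslash escaping from klog's quoting.
raw = r'"{\"metadata\":{\"uid\":\"ae199f42-bfab-4367-aadd-54f3ab99b342\"},\"status\":{\"phase\":\"Running\"}}"'

# The first json.loads strips the outer quoting and unescapes the inner
# quotes; the second parses the recovered patch document itself.
patch = json.loads(json.loads(raw))

print(patch["metadata"]["uid"])   # ae199f42-bfab-4367-aadd-54f3ab99b342
print(patch["status"]["phase"])   # Running
```

Every one of these patches is rejected for the same reason stated at the end of each record: the pod.network-node-identity.openshift.io webhook call fails TLS verification, so none of the status updates land.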
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.377119 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.377176 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.377187 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.377202 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.377213 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.480289 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.480351 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.480374 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.480408 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.480430 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.583454 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.583603 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.583626 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.583650 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.583670 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.685983 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.686030 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.686038 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.686053 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.686061 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.788557 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.788593 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.788601 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.788615 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.788626 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.890729 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.890768 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.890777 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.890794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.890803 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.930077 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.930104 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.930077 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:42 crc kubenswrapper[4990]: E1205 01:09:42.930222 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:42 crc kubenswrapper[4990]: E1205 01:09:42.930273 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:42 crc kubenswrapper[4990]: E1205 01:09:42.930326 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
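(annotation) The NotReady heartbeats repeating above all cite one root cause: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so NetworkReady stays false until the network provider writes one. A minimal sketch of the equivalent check an operator might run on the node; the accepted extensions follow the usual CNI conventions and are an assumption here:

```python
from pathlib import Path

# Directory named in the NotReady condition above.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

# Conventional CNI config extensions (assumption, matching common libcni usage).
EXTS = {".conf", ".conflist", ".json"}

confs = sorted(p for p in CNI_DIR.iterdir() if p.suffix in EXTS) if CNI_DIR.is_dir() else []

if confs:
    for path in confs:
        print("found CNI config:", path)
else:
    # Mirrors the log: the node keeps reporting NotReady until the network
    # plugin (OVN-Kubernetes on this cluster) drops a config file here.
    print(f"no CNI configuration files in {CNI_DIR}; network plugin not ready")
```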
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.992836 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.992878 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.992887 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.992903 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:42 crc kubenswrapper[4990]: I1205 01:09:42.992914 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:42Z","lastTransitionTime":"2025-12-05T01:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.095377 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.095413 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.095422 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.095435 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.095443 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.198324 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.198368 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.198379 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.198396 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.198473 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.301410 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.301451 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.301460 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.301493 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.301507 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.404419 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.404466 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.404491 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.404510 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.404524 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.507220 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.507305 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.507319 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.507339 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.507354 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.610146 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.610186 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.610200 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.610217 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.610229 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.712560 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.712604 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.712616 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.712633 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.712644 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.815543 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.815601 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.815624 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.815642 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.815654 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.919320 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.919366 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.919376 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.919411 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.919422 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:43Z","lastTransitionTime":"2025-12-05T01:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:43 crc kubenswrapper[4990]: I1205 01:09:43.930035 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:43 crc kubenswrapper[4990]: E1205 01:09:43.930174 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.023292 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.023378 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.023402 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.023520 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.023549 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.126002 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.126050 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.126061 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.126081 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.126094 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.229116 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.229186 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.229204 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.229237 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.229261 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.332734 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.332794 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.332809 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.332829 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.332840 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.435824 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.435870 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.435880 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.435898 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.435909 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.537884 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.537961 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.537972 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.537988 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.537999 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.640604 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.640660 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.640677 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.640703 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.640716 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.743926 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.744354 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.744379 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.744414 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.744443 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.848243 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.848292 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.848302 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.848322 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.848334 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.853726 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.853840 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.853816093 +0000 UTC m=+147.230031454 (durationBeforeRetry 1m4s). 
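(annotation) Failed volume operations are not retried immediately: the record above schedules the next attempt for 01:10:48 with durationBeforeRetry 1m4s, and the TearDown error detail continues below. 64 s is consistent with a per-operation backoff that doubles on each consecutive failure; the 500 ms initial delay and the cap in this sketch are illustrative assumptions, not values read from the log:

```python
from datetime import timedelta

# Illustrative doubling backoff (initial delay and cap are assumptions).
INITIAL = timedelta(milliseconds=500)
CAP = timedelta(minutes=2, seconds=2)

def duration_before_retry(failures: int) -> timedelta:
    """Delay imposed before the next attempt after `failures` consecutive errors."""
    return min(INITIAL * (2 ** (failures - 1)), CAP)

# Eight consecutive failures: 0.5 s * 2**7 = 64 s, the 1m4s seen above.
print(duration_before_retry(8))   # 0:01:04
```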
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.853938 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.854012 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.854096 4990 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.854145 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.854138022 +0000 UTC m=+147.230353383 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.854097 4990 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.854243 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.854231715 +0000 UTC m=+147.230447086 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.929959 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.930041 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.929959 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.930113 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.930238 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.930353 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.951407 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.951462 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.951518 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.951557 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.951581 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:44Z","lastTransitionTime":"2025-12-05T01:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.955210 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:44 crc kubenswrapper[4990]: I1205 01:09:44.955284 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955511 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955567 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955590 4990 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955525 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955661 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.955634431 +0000 UTC m=+147.331849822 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955675 4990 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955692 4990 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 01:09:44 crc kubenswrapper[4990]: E1205 01:09:44.955795 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.955775215 +0000 UTC m=+147.331990596 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
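These "object ... not registered" failures do not mean the ConfigMaps are absent from the cluster: the kubelet serves secret and configMap volume sources from its own watch caches, and "not registered" typically means the pod's references have not yet been (re)registered with that cache manager after the restart (compare the later "Caches populated" entries for openshift-cluster-version, after which mounts succeed). A kube-api-access-* volume is a projected volume bundling the serviceaccount token with the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, which is why both objects appear in one error. A hypothetical client-go check that the objects really do exist server-side (assumes a reachable kubeconfig file named "kubeconfig"; not part of this log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Hypothetical diagnostic: ask the API server directly for the ConfigMaps
// that back a kube-api-access projected volume, to separate "object missing"
// from "kubelet cache not yet populated".
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").
			Get(context.Background(), name, metav1.GetOptions{})
		fmt.Printf("%s: err=%v\n", name, err)
	}
}

Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.054903 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.054982 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.055002 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.055029 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.055051 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.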
Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.158378 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.158436 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.158449 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.158474 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.158505 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.261285 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.261332 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.261345 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.261365 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.261378 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.363847 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.363942 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.363955 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.363976 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.363990 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.466217 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.466315 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.466333 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.466356 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.466370 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.570120 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.570205 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.570225 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.570256 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.570276 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.672704 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.672751 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.672762 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.672781 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.672793 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.775548 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.775591 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.775605 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.775625 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.775640 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.878707 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.878791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.878816 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.878849 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.878873 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.929450 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:45 crc kubenswrapper[4990]: E1205 01:09:45.929702 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.982246 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.982302 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.982316 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.982339 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:45 crc kubenswrapper[4990]: I1205 01:09:45.982357 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:45Z","lastTransitionTime":"2025-12-05T01:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.085286 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.085355 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.085378 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.085408 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.085434 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.188727 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.188786 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.188806 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.188838 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.188864 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.292791 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.292874 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.292897 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.292928 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.292947 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.396211 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.397028 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.397053 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.397093 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.397120 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.500525 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.500605 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.500628 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.500664 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.500690 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.603937 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.603987 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.604000 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.604019 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.604070 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.707343 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.707437 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.707463 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.707585 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.707643 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.782149 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.782215 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.782235 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.782265 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.782285 4990 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T01:09:46Z","lastTransitionTime":"2025-12-05T01:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.849982 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7"] Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.850498 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.853503 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.853699 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.854228 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.854742 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.874157 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3f252dd-d372-473f-8346-dbd7b5ba10d4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.874241 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f252dd-d372-473f-8346-dbd7b5ba10d4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.874330 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f252dd-d372-473f-8346-dbd7b5ba10d4-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.874379 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3f252dd-d372-473f-8346-dbd7b5ba10d4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.874530 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3f252dd-d372-473f-8346-dbd7b5ba10d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.916023 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rdhk7" podStartSLOduration=65.915997672 podStartE2EDuration="1m5.915997672s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:46.885170156 +0000 UTC m=+85.261385527" watchObservedRunningTime="2025-12-05 01:09:46.915997672 +0000 UTC m=+85.292213043" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.930313 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.930352 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.930403 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:46 crc kubenswrapper[4990]: E1205 01:09:46.930462 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:46 crc kubenswrapper[4990]: E1205 01:09:46.930638 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:46 crc kubenswrapper[4990]: E1205 01:09:46.930714 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.945454 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vlg2t" podStartSLOduration=65.945428526 podStartE2EDuration="1m5.945428526s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:46.931812587 +0000 UTC m=+85.308027978" watchObservedRunningTime="2025-12-05 01:09:46.945428526 +0000 UTC m=+85.321643897" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.965764 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=60.965738976 podStartE2EDuration="1m0.965738976s" podCreationTimestamp="2025-12-05 01:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:46.965687704 +0000 UTC m=+85.341903085" watchObservedRunningTime="2025-12-05 01:09:46.965738976 +0000 UTC m=+85.341954337" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.975769 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3f252dd-d372-473f-8346-dbd7b5ba10d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.975842 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3f252dd-d372-473f-8346-dbd7b5ba10d4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.975898 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f252dd-d372-473f-8346-dbd7b5ba10d4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.975923 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f252dd-d372-473f-8346-dbd7b5ba10d4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.975960 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3f252dd-d372-473f-8346-dbd7b5ba10d4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.975960 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3f252dd-d372-473f-8346-dbd7b5ba10d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.976033 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3f252dd-d372-473f-8346-dbd7b5ba10d4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.977203 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3f252dd-d372-473f-8346-dbd7b5ba10d4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:46 crc kubenswrapper[4990]: I1205 01:09:46.983720 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f252dd-d372-473f-8346-dbd7b5ba10d4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.000508 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3f252dd-d372-473f-8346-dbd7b5ba10d4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v4ph7\" (UID: \"b3f252dd-d372-473f-8346-dbd7b5ba10d4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.013640 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wb424" podStartSLOduration=66.013611244 podStartE2EDuration="1m6.013611244s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.013546302 +0000 UTC m=+85.389761683" watchObservedRunningTime="2025-12-05 01:09:47.013611244 +0000 UTC m=+85.389826615" Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.045948 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=33.045921134 podStartE2EDuration="33.045921134s" podCreationTimestamp="2025-12-05 01:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.045512172 +0000 UTC m=+85.421727553" 
watchObservedRunningTime="2025-12-05 01:09:47.045921134 +0000 UTC m=+85.422136495"
Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.102794 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podStartSLOduration=66.102773372 podStartE2EDuration="1m6.102773372s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.102297187 +0000 UTC m=+85.478512558" watchObservedRunningTime="2025-12-05 01:09:47.102773372 +0000 UTC m=+85.478988733"
Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.118404 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.11837851 podStartE2EDuration="1m6.11837851s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.117396931 +0000 UTC m=+85.493612292" watchObservedRunningTime="2025-12-05 01:09:47.11837851 +0000 UTC m=+85.494593891"
Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.151500 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f6zb4" podStartSLOduration=66.151463964 podStartE2EDuration="1m6.151463964s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.150928268 +0000 UTC m=+85.527143639" watchObservedRunningTime="2025-12-05 01:09:47.151463964 +0000 UTC m=+85.527679325"
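The pod_startup_latency_tracker entries compute podStartE2EDuration as the observed running time minus podCreationTimestamp; firstStartedPulling/lastFinishedPulling at the zero time "0001-01-01 00:00:00 +0000 UTC" just mean no image pull was observed for these already-present images. A small Go check of the arithmetic for machine-config-daemon-zxlh5, using the timestamps from the entry above:

package main

import (
	"fmt"
	"time"
)

// Verifies podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
// for the machine-config-daemon-zxlh5 entry above.
func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-05 01:08:41 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-12-05 01:09:47.102773372 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 1m6.102773372s, matching the logged value
}

The kube-apiserver-crc and multus-additional-cni-plugins-f6zb4 entries above check out the same way.

Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.170467 4990 util.go:30] "No sandbox for pod can be found.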
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.178888 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pss5l" podStartSLOduration=65.178858637 podStartE2EDuration="1m5.178858637s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.16532688 +0000 UTC m=+85.541542242" watchObservedRunningTime="2025-12-05 01:09:47.178858637 +0000 UTC m=+85.555074008" Dec 05 01:09:47 crc kubenswrapper[4990]: W1205 01:09:47.188185 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3f252dd_d372_473f_8346_dbd7b5ba10d4.slice/crio-405d0b3d980cb1f2a3c5bc422675814fb7bdbba9e485d333cddd662f9b7fa84b WatchSource:0}: Error finding container 405d0b3d980cb1f2a3c5bc422675814fb7bdbba9e485d333cddd662f9b7fa84b: Status 404 returned error can't find the container with id 405d0b3d980cb1f2a3c5bc422675814fb7bdbba9e485d333cddd662f9b7fa84b Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.463170 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" event={"ID":"b3f252dd-d372-473f-8346-dbd7b5ba10d4","Type":"ContainerStarted","Data":"b6f4acf8c83bcaf3ca922ca75ee7fd1f6844e65df920cf1c6af3e1b34680e87a"} Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.463527 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" event={"ID":"b3f252dd-d372-473f-8346-dbd7b5ba10d4","Type":"ContainerStarted","Data":"405d0b3d980cb1f2a3c5bc422675814fb7bdbba9e485d333cddd662f9b7fa84b"} Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.480204 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=8.480176687 podStartE2EDuration="8.480176687s" podCreationTimestamp="2025-12-05 01:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.178854037 +0000 UTC m=+85.555069398" watchObservedRunningTime="2025-12-05 01:09:47.480176687 +0000 UTC m=+85.856392048" Dec 05 01:09:47 crc kubenswrapper[4990]: I1205 01:09:47.930199 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:47 crc kubenswrapper[4990]: E1205 01:09:47.931069 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:48 crc kubenswrapper[4990]: I1205 01:09:48.929917 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:48 crc kubenswrapper[4990]: I1205 01:09:48.930007 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:48 crc kubenswrapper[4990]: I1205 01:09:48.930041 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:48 crc kubenswrapper[4990]: E1205 01:09:48.930113 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:48 crc kubenswrapper[4990]: E1205 01:09:48.930201 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:48 crc kubenswrapper[4990]: E1205 01:09:48.930341 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:49 crc kubenswrapper[4990]: I1205 01:09:49.929863 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:49 crc kubenswrapper[4990]: E1205 01:09:49.930191 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:49 crc kubenswrapper[4990]: I1205 01:09:49.945411 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v4ph7" podStartSLOduration=68.945386823 podStartE2EDuration="1m8.945386823s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:47.479874288 +0000 UTC m=+85.856089659" watchObservedRunningTime="2025-12-05 01:09:49.945386823 +0000 UTC m=+88.321602194" Dec 05 01:09:49 crc kubenswrapper[4990]: I1205 01:09:49.946911 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 05 01:09:50 crc kubenswrapper[4990]: I1205 01:09:50.929861 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:50 crc kubenswrapper[4990]: I1205 01:09:50.929865 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:50 crc kubenswrapper[4990]: E1205 01:09:50.930014 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:50 crc kubenswrapper[4990]: E1205 01:09:50.930099 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:50 crc kubenswrapper[4990]: I1205 01:09:50.929888 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:50 crc kubenswrapper[4990]: E1205 01:09:50.930174 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:51 crc kubenswrapper[4990]: I1205 01:09:51.929832 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:51 crc kubenswrapper[4990]: E1205 01:09:51.931432 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:52 crc kubenswrapper[4990]: I1205 01:09:52.929597 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:52 crc kubenswrapper[4990]: I1205 01:09:52.929619 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:52 crc kubenswrapper[4990]: E1205 01:09:52.929726 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:52 crc kubenswrapper[4990]: I1205 01:09:52.929960 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:52 crc kubenswrapper[4990]: E1205 01:09:52.930026 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 01:09:52 crc kubenswrapper[4990]: E1205 01:09:52.930154 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 01:09:53 crc kubenswrapper[4990]: I1205 01:09:53.929669 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s"
Dec 05 01:09:53 crc kubenswrapper[4990]: E1205 01:09:53.929847 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e"
Dec 05 01:09:53 crc kubenswrapper[4990]: I1205 01:09:53.930786 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"
Dec 05 01:09:53 crc kubenswrapper[4990]: E1205 01:09:53.930994 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c"
Dec 05 01:09:54 crc kubenswrapper[4990]: I1205 01:09:54.929964 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:09:54 crc kubenswrapper[4990]: I1205 01:09:54.929964 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 01:09:54 crc kubenswrapper[4990]: I1205 01:09:54.930122 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
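The "back-off 40s" for ovnkube-controller above is the kubelet's container restart back-off (CrashLoopBackOff), a separate mechanism from the volume retry back-off earlier in this log. A sketch under the commonly cited defaults of a 10s initial delay doubling per consecutive crash and capping at 5m (an assumption, not stated here), under which 40s corresponds to the third consecutive restart:

package main

import (
	"fmt"
	"time"
)

// Assumed CrashLoopBackOff schedule: 10s initial delay, doubling per
// consecutive crash, capped at 5m. "back-off 40s" matches crash 3.
func main() {
	backoff := 10 * time.Second
	for crash := 1; crash <= 6; crash++ {
		fmt.Printf("crash %d -> back-off %v\n", crash, backoff)
		backoff *= 2
		if backoff > 5*time.Minute {
			backoff = 5 * time.Minute
		}
	}
}

Dec 05 01:09:54 crc kubenswrapper[4990]: E1205 01:09:54.930244 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"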
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:54 crc kubenswrapper[4990]: E1205 01:09:54.930662 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:54 crc kubenswrapper[4990]: E1205 01:09:54.930818 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:55 crc kubenswrapper[4990]: I1205 01:09:55.929734 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:55 crc kubenswrapper[4990]: E1205 01:09:55.929923 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:56 crc kubenswrapper[4990]: I1205 01:09:56.930031 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:56 crc kubenswrapper[4990]: I1205 01:09:56.930116 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:56 crc kubenswrapper[4990]: I1205 01:09:56.930142 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:56 crc kubenswrapper[4990]: E1205 01:09:56.930665 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:56 crc kubenswrapper[4990]: E1205 01:09:56.930936 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:56 crc kubenswrapper[4990]: E1205 01:09:56.930831 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:57 crc kubenswrapper[4990]: I1205 01:09:57.930213 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:57 crc kubenswrapper[4990]: E1205 01:09:57.930387 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:09:58 crc kubenswrapper[4990]: I1205 01:09:58.929751 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:09:58 crc kubenswrapper[4990]: I1205 01:09:58.929751 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:09:58 crc kubenswrapper[4990]: E1205 01:09:58.930477 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:09:58 crc kubenswrapper[4990]: I1205 01:09:58.929837 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:09:58 crc kubenswrapper[4990]: E1205 01:09:58.930729 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:09:58 crc kubenswrapper[4990]: E1205 01:09:58.931287 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:09:59 crc kubenswrapper[4990]: I1205 01:09:59.929614 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:09:59 crc kubenswrapper[4990]: E1205 01:09:59.929811 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:00 crc kubenswrapper[4990]: I1205 01:10:00.197670 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:00 crc kubenswrapper[4990]: E1205 01:10:00.197921 4990 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:10:00 crc kubenswrapper[4990]: E1205 01:10:00.198071 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs podName:7760172e-33aa-4de9-bd10-6a92c0851c6e nodeName:}" failed. No retries permitted until 2025-12-05 01:11:04.19803625 +0000 UTC m=+162.574251791 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs") pod "network-metrics-daemon-bxb6s" (UID: "7760172e-33aa-4de9-bd10-6a92c0851c6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 01:10:00 crc kubenswrapper[4990]: I1205 01:10:00.930041 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:00 crc kubenswrapper[4990]: I1205 01:10:00.930104 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:00 crc kubenswrapper[4990]: I1205 01:10:00.930142 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:00 crc kubenswrapper[4990]: E1205 01:10:00.930312 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:00 crc kubenswrapper[4990]: E1205 01:10:00.930432 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:00 crc kubenswrapper[4990]: E1205 01:10:00.930607 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:01 crc kubenswrapper[4990]: I1205 01:10:01.938640 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:01 crc kubenswrapper[4990]: E1205 01:10:01.939866 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:02 crc kubenswrapper[4990]: I1205 01:10:02.930196 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:02 crc kubenswrapper[4990]: I1205 01:10:02.930301 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:02 crc kubenswrapper[4990]: E1205 01:10:02.930410 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:02 crc kubenswrapper[4990]: E1205 01:10:02.930598 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:02 crc kubenswrapper[4990]: I1205 01:10:02.930828 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:02 crc kubenswrapper[4990]: E1205 01:10:02.930924 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:03 crc kubenswrapper[4990]: I1205 01:10:03.930077 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:03 crc kubenswrapper[4990]: E1205 01:10:03.930313 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:04 crc kubenswrapper[4990]: I1205 01:10:04.929542 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:04 crc kubenswrapper[4990]: I1205 01:10:04.929588 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:04 crc kubenswrapper[4990]: I1205 01:10:04.929684 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:04 crc kubenswrapper[4990]: E1205 01:10:04.929778 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:04 crc kubenswrapper[4990]: E1205 01:10:04.929861 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:04 crc kubenswrapper[4990]: E1205 01:10:04.930006 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:05 crc kubenswrapper[4990]: I1205 01:10:05.929854 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:05 crc kubenswrapper[4990]: E1205 01:10:05.930052 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:06 crc kubenswrapper[4990]: I1205 01:10:06.930223 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:06 crc kubenswrapper[4990]: I1205 01:10:06.930330 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:06 crc kubenswrapper[4990]: E1205 01:10:06.930469 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:06 crc kubenswrapper[4990]: E1205 01:10:06.930579 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:06 crc kubenswrapper[4990]: I1205 01:10:06.931266 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:06 crc kubenswrapper[4990]: E1205 01:10:06.931346 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:06 crc kubenswrapper[4990]: I1205 01:10:06.932215 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:10:06 crc kubenswrapper[4990]: E1205 01:10:06.932679 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4w6g9_openshift-ovn-kubernetes(3eeec70d-1c5c-434e-90bc-95620458151c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" Dec 05 01:10:07 crc kubenswrapper[4990]: I1205 01:10:07.930088 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:07 crc kubenswrapper[4990]: E1205 01:10:07.930347 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:08 crc kubenswrapper[4990]: I1205 01:10:08.930056 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:08 crc kubenswrapper[4990]: I1205 01:10:08.930073 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:08 crc kubenswrapper[4990]: E1205 01:10:08.930319 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:08 crc kubenswrapper[4990]: I1205 01:10:08.930385 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:08 crc kubenswrapper[4990]: E1205 01:10:08.930568 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:08 crc kubenswrapper[4990]: E1205 01:10:08.930712 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:09 crc kubenswrapper[4990]: I1205 01:10:09.930119 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:09 crc kubenswrapper[4990]: E1205 01:10:09.930415 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:10 crc kubenswrapper[4990]: I1205 01:10:10.930625 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:10 crc kubenswrapper[4990]: I1205 01:10:10.930683 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:10 crc kubenswrapper[4990]: I1205 01:10:10.930809 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:10 crc kubenswrapper[4990]: E1205 01:10:10.930888 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:10 crc kubenswrapper[4990]: E1205 01:10:10.931437 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:10 crc kubenswrapper[4990]: E1205 01:10:10.931535 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:11 crc kubenswrapper[4990]: I1205 01:10:11.929564 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:11 crc kubenswrapper[4990]: E1205 01:10:11.931565 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:12 crc kubenswrapper[4990]: I1205 01:10:12.929679 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:12 crc kubenswrapper[4990]: I1205 01:10:12.929742 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:12 crc kubenswrapper[4990]: I1205 01:10:12.929704 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:12 crc kubenswrapper[4990]: E1205 01:10:12.930203 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:12 crc kubenswrapper[4990]: E1205 01:10:12.930372 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:12 crc kubenswrapper[4990]: E1205 01:10:12.930551 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:13 crc kubenswrapper[4990]: I1205 01:10:13.929623 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:13 crc kubenswrapper[4990]: E1205 01:10:13.929912 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:14 crc kubenswrapper[4990]: I1205 01:10:14.929526 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:14 crc kubenswrapper[4990]: I1205 01:10:14.929573 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:14 crc kubenswrapper[4990]: I1205 01:10:14.929589 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:14 crc kubenswrapper[4990]: E1205 01:10:14.930115 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:14 crc kubenswrapper[4990]: E1205 01:10:14.930245 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:14 crc kubenswrapper[4990]: E1205 01:10:14.930564 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:15 crc kubenswrapper[4990]: I1205 01:10:15.929778 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:15 crc kubenswrapper[4990]: E1205 01:10:15.929985 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:16 crc kubenswrapper[4990]: I1205 01:10:16.930364 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:16 crc kubenswrapper[4990]: I1205 01:10:16.930436 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:16 crc kubenswrapper[4990]: I1205 01:10:16.930375 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:16 crc kubenswrapper[4990]: E1205 01:10:16.930633 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:16 crc kubenswrapper[4990]: E1205 01:10:16.930719 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:16 crc kubenswrapper[4990]: E1205 01:10:16.930830 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.576127 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/1.log" Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.576946 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/0.log" Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.577043 4990 generic.go:334] "Generic (PLEG): container finished" podID="c4914133-b0cd-4d12-84d5-c99379e2324a" containerID="2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602" exitCode=1 Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.577101 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerDied","Data":"2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602"} Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.577161 4990 scope.go:117] "RemoveContainer" containerID="65f231716e0bbfb321bcc7932523d4eca424dcfffa36bad41061f46ac94cb566" Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.577871 4990 scope.go:117] "RemoveContainer" containerID="2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602" Dec 05 01:10:17 crc kubenswrapper[4990]: E1205 01:10:17.578218 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rdhk7_openshift-multus(c4914133-b0cd-4d12-84d5-c99379e2324a)\"" pod="openshift-multus/multus-rdhk7" podUID="c4914133-b0cd-4d12-84d5-c99379e2324a" Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.600619 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.600585432 podStartE2EDuration="28.600585432s" podCreationTimestamp="2025-12-05 01:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:09:51.975559011 +0000 UTC m=+90.351774372" watchObservedRunningTime="2025-12-05 01:10:17.600585432 +0000 UTC m=+115.976800823" Dec 05 01:10:17 crc kubenswrapper[4990]: I1205 01:10:17.930563 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:17 crc kubenswrapper[4990]: E1205 01:10:17.930773 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:18 crc kubenswrapper[4990]: I1205 01:10:18.583378 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/1.log" Dec 05 01:10:18 crc kubenswrapper[4990]: I1205 01:10:18.930398 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:18 crc kubenswrapper[4990]: I1205 01:10:18.930398 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:18 crc kubenswrapper[4990]: I1205 01:10:18.930625 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:18 crc kubenswrapper[4990]: E1205 01:10:18.930725 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:18 crc kubenswrapper[4990]: E1205 01:10:18.930976 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:18 crc kubenswrapper[4990]: E1205 01:10:18.931157 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:19 crc kubenswrapper[4990]: I1205 01:10:19.930375 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:19 crc kubenswrapper[4990]: E1205 01:10:19.930576 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:20 crc kubenswrapper[4990]: I1205 01:10:20.929389 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:20 crc kubenswrapper[4990]: I1205 01:10:20.929388 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:20 crc kubenswrapper[4990]: I1205 01:10:20.929415 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:20 crc kubenswrapper[4990]: E1205 01:10:20.929730 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:20 crc kubenswrapper[4990]: E1205 01:10:20.929804 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:20 crc kubenswrapper[4990]: E1205 01:10:20.929893 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:21 crc kubenswrapper[4990]: E1205 01:10:21.865615 4990 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 05 01:10:21 crc kubenswrapper[4990]: I1205 01:10:21.929988 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:21 crc kubenswrapper[4990]: E1205 01:10:21.932274 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:21 crc kubenswrapper[4990]: I1205 01:10:21.933545 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:10:22 crc kubenswrapper[4990]: E1205 01:10:22.025842 4990 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.603720 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/3.log" Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.607076 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerStarted","Data":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.607920 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.642277 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podStartSLOduration=101.642241682 podStartE2EDuration="1m41.642241682s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:22.642145979 +0000 UTC m=+121.018361350" watchObservedRunningTime="2025-12-05 01:10:22.642241682 +0000 UTC m=+121.018457053" Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.820867 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bxb6s"] Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.821079 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:22 crc kubenswrapper[4990]: E1205 01:10:22.821242 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.930343 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.930390 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:22 crc kubenswrapper[4990]: I1205 01:10:22.930555 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:22 crc kubenswrapper[4990]: E1205 01:10:22.930675 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:22 crc kubenswrapper[4990]: E1205 01:10:22.930834 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:22 crc kubenswrapper[4990]: E1205 01:10:22.930916 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:23 crc kubenswrapper[4990]: I1205 01:10:23.930567 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:23 crc kubenswrapper[4990]: E1205 01:10:23.930817 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:24 crc kubenswrapper[4990]: I1205 01:10:24.930569 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:24 crc kubenswrapper[4990]: I1205 01:10:24.930599 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:24 crc kubenswrapper[4990]: I1205 01:10:24.930569 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:24 crc kubenswrapper[4990]: E1205 01:10:24.930794 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:24 crc kubenswrapper[4990]: E1205 01:10:24.931017 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:24 crc kubenswrapper[4990]: E1205 01:10:24.930732 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:25 crc kubenswrapper[4990]: I1205 01:10:25.929845 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:25 crc kubenswrapper[4990]: E1205 01:10:25.930116 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:26 crc kubenswrapper[4990]: I1205 01:10:26.930223 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:26 crc kubenswrapper[4990]: E1205 01:10:26.930368 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:26 crc kubenswrapper[4990]: I1205 01:10:26.930379 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:26 crc kubenswrapper[4990]: E1205 01:10:26.930635 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:26 crc kubenswrapper[4990]: I1205 01:10:26.930417 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:26 crc kubenswrapper[4990]: E1205 01:10:26.930776 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:27 crc kubenswrapper[4990]: E1205 01:10:27.027828 4990 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 01:10:27 crc kubenswrapper[4990]: I1205 01:10:27.930005 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:27 crc kubenswrapper[4990]: E1205 01:10:27.930609 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:28 crc kubenswrapper[4990]: I1205 01:10:28.141952 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:10:28 crc kubenswrapper[4990]: I1205 01:10:28.930231 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:28 crc kubenswrapper[4990]: I1205 01:10:28.930399 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:28 crc kubenswrapper[4990]: I1205 01:10:28.930765 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:28 crc kubenswrapper[4990]: E1205 01:10:28.930775 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:28 crc kubenswrapper[4990]: E1205 01:10:28.930952 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:28 crc kubenswrapper[4990]: E1205 01:10:28.931086 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:29 crc kubenswrapper[4990]: I1205 01:10:29.930202 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:29 crc kubenswrapper[4990]: E1205 01:10:29.930393 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:30 crc kubenswrapper[4990]: I1205 01:10:30.929559 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:30 crc kubenswrapper[4990]: I1205 01:10:30.929622 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:30 crc kubenswrapper[4990]: I1205 01:10:30.929718 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:30 crc kubenswrapper[4990]: E1205 01:10:30.929879 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:30 crc kubenswrapper[4990]: E1205 01:10:30.930114 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:30 crc kubenswrapper[4990]: E1205 01:10:30.930316 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:31 crc kubenswrapper[4990]: I1205 01:10:31.929899 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:31 crc kubenswrapper[4990]: E1205 01:10:31.932855 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:31 crc kubenswrapper[4990]: I1205 01:10:31.933543 4990 scope.go:117] "RemoveContainer" containerID="2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602" Dec 05 01:10:32 crc kubenswrapper[4990]: E1205 01:10:32.028987 4990 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 01:10:32 crc kubenswrapper[4990]: I1205 01:10:32.648877 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/1.log" Dec 05 01:10:32 crc kubenswrapper[4990]: I1205 01:10:32.648960 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerStarted","Data":"81c1369f091e1a32060c7351e6bfa8a258a7bc0cb73ee05d789c98d4a4a69887"} Dec 05 01:10:32 crc kubenswrapper[4990]: I1205 01:10:32.930235 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:32 crc kubenswrapper[4990]: I1205 01:10:32.930374 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:32 crc kubenswrapper[4990]: I1205 01:10:32.930235 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:32 crc kubenswrapper[4990]: E1205 01:10:32.930663 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:32 crc kubenswrapper[4990]: E1205 01:10:32.930736 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:32 crc kubenswrapper[4990]: E1205 01:10:32.930401 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:33 crc kubenswrapper[4990]: I1205 01:10:33.929602 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:33 crc kubenswrapper[4990]: E1205 01:10:33.929834 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:34 crc kubenswrapper[4990]: I1205 01:10:34.930406 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:34 crc kubenswrapper[4990]: I1205 01:10:34.930547 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:34 crc kubenswrapper[4990]: E1205 01:10:34.931152 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:34 crc kubenswrapper[4990]: I1205 01:10:34.930680 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:34 crc kubenswrapper[4990]: E1205 01:10:34.931357 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:34 crc kubenswrapper[4990]: E1205 01:10:34.931605 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:35 crc kubenswrapper[4990]: I1205 01:10:35.930218 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:35 crc kubenswrapper[4990]: E1205 01:10:35.930441 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxb6s" podUID="7760172e-33aa-4de9-bd10-6a92c0851c6e" Dec 05 01:10:36 crc kubenswrapper[4990]: I1205 01:10:36.929740 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:36 crc kubenswrapper[4990]: I1205 01:10:36.929847 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:36 crc kubenswrapper[4990]: I1205 01:10:36.929847 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:36 crc kubenswrapper[4990]: E1205 01:10:36.929995 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 01:10:36 crc kubenswrapper[4990]: E1205 01:10:36.930183 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 01:10:36 crc kubenswrapper[4990]: E1205 01:10:36.930326 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 01:10:37 crc kubenswrapper[4990]: I1205 01:10:37.929932 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:10:37 crc kubenswrapper[4990]: I1205 01:10:37.933172 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 01:10:37 crc kubenswrapper[4990]: I1205 01:10:37.933782 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.263821 4990 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.318268 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.319058 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.319883 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t5772"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.320873 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.321715 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.322359 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.323565 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gxktl"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.323582 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.324898 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.325065 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.325756 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.338311 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.339601 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.340577 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.346744 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nbpzn"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.347544 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.349163 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.350086 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.350653 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-lwm9p"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.351380 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-lwm9p" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.365919 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnwbq"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.372778 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.373071 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.373301 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.373527 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.374797 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.381210 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.385195 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.385584 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.385806 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.386007 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.386252 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.386448 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.391335 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.394175 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.394523 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.398222 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.398587 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.398978 4990 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.412466 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.412717 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.413053 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.413185 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.413775 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.413999 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414051 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414082 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414145 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414213 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414235 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414280 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414337 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414404 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414432 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414437 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414243 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414440 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414545 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc 
kubenswrapper[4990]: I1205 01:10:38.414464 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414624 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414711 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414755 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.414950 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.415284 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.415342 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.415447 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.416049 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.418014 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.418306 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.421336 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.421921 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.422151 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lp5lw"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.422468 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.422657 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.422764 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.449968 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aabcff3-3471-4c91-bebe-e91dca530018-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450017 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f30745ea-b3b2-4595-82b0-d04d2010e590-trusted-ca\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450048 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-etcd-client\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450070 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-etcd-client\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450100 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77c0a017-4985-4ba7-bc61-35e7a17f3950-audit-dir\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450118 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpk6k\" (UniqueName: \"kubernetes.io/projected/bbf181f5-152c-4424-9206-9b2981b901ac-kube-api-access-bpk6k\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450138 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-serving-cert\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450159 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgcb\" (UniqueName: \"kubernetes.io/projected/f30745ea-b3b2-4595-82b0-d04d2010e590-kube-api-access-cvgcb\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " 
pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450179 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450196 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-audit\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450217 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-config\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450236 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-serving-cert\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450253 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-etcd-serving-ca\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450272 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhcw\" (UniqueName: \"kubernetes.io/projected/ad32df98-dc0d-4b57-b02b-b04aa0d9db65-kube-api-access-rnhcw\") pod \"dns-operator-744455d44c-rnwbq\" (UID: \"ad32df98-dc0d-4b57-b02b-b04aa0d9db65\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450300 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9467079-b825-4a3d-b56c-254057a3b5fb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450328 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-encryption-config\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450357 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1291830b-16cb-4eab-9d73-6edd38a86882-node-pullsecrets\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450375 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450398 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/870dfc46-9efe-4184-8bf9-7c8a6a70f6e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5hlb\" (UID: \"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450418 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22ld\" (UniqueName: \"kubernetes.io/projected/870dfc46-9efe-4184-8bf9-7c8a6a70f6e9-kube-api-access-f22ld\") pod \"cluster-samples-operator-665b6dd947-j5hlb\" (UID: \"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450438 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-image-import-ca\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450455 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-client-ca\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450529 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4f6\" (UniqueName: \"kubernetes.io/projected/6aabcff3-3471-4c91-bebe-e91dca530018-kube-api-access-gz4f6\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450553 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450586 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30745ea-b3b2-4595-82b0-d04d2010e590-serving-cert\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450604 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vt7\" (UniqueName: \"kubernetes.io/projected/f9467079-b825-4a3d-b56c-254057a3b5fb-kube-api-access-h9vt7\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450622 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6n42\" (UniqueName: \"kubernetes.io/projected/77c0a017-4985-4ba7-bc61-35e7a17f3950-kube-api-access-z6n42\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450643 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9467079-b825-4a3d-b56c-254057a3b5fb-config\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450661 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf181f5-152c-4424-9206-9b2981b901ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450678 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1291830b-16cb-4eab-9d73-6edd38a86882-audit-dir\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450695 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad32df98-dc0d-4b57-b02b-b04aa0d9db65-metrics-tls\") pod \"dns-operator-744455d44c-rnwbq\" (UID: \"ad32df98-dc0d-4b57-b02b-b04aa0d9db65\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450713 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f9467079-b825-4a3d-b56c-254057a3b5fb-images\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450728 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f30745ea-b3b2-4595-82b0-d04d2010e590-config\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450745 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-config\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450762 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aabcff3-3471-4c91-bebe-e91dca530018-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450789 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbc6n\" (UniqueName: \"kubernetes.io/projected/1291830b-16cb-4eab-9d73-6edd38a86882-kube-api-access-bbc6n\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450806 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx59g\" (UniqueName: \"kubernetes.io/projected/a567c246-6c6e-4f05-bde4-dc9d9dfc3699-kube-api-access-wx59g\") pod \"downloads-7954f5f757-lwm9p\" (UID: \"a567c246-6c6e-4f05-bde4-dc9d9dfc3699\") " pod="openshift-console/downloads-7954f5f757-lwm9p" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450822 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-audit-policies\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.450847 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-encryption-config\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.456911 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.457783 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.457916 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.459760 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.460038 
4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.460159 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.461296 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.461413 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.461589 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.461602 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462226 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462529 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462601 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462637 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462705 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462781 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462846 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.462872 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.463060 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.463136 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.463283 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.463383 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.469551 4990 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g6z24"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.470256 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.471388 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.471471 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.472520 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.472710 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.472752 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.472899 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.472934 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473012 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473075 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473143 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473177 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473279 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473389 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473395 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473517 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473557 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.473519 4990 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.474150 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lj2c6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.474169 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.475058 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.475718 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.476257 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.476295 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.476875 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.489439 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.499326 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vchzm"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.500886 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.508882 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.511982 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.514684 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.514908 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.524914 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.525376 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.526448 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.528385 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.528822 4990 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.529735 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.530207 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.531551 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.531946 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.532036 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.532110 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.532450 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.532671 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.533503 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-slxt5"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.533915 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sxmsc"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534011 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534262 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534421 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534560 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2wzr5"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534782 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534875 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534890 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.534952 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.538739 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.538941 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.541080 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.541232 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.545061 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.545161 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.546101 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.546937 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.549730 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.550468 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551445 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77c0a017-4985-4ba7-bc61-35e7a17f3950-audit-dir\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551520 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551544 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-policies\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551563 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e00f3a37-11c1-4862-b7db-c324afcf2214-auth-proxy-config\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551581 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgcb\" (UniqueName: \"kubernetes.io/projected/f30745ea-b3b2-4595-82b0-d04d2010e590-kube-api-access-cvgcb\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551597 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-config\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551599 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/77c0a017-4985-4ba7-bc61-35e7a17f3950-audit-dir\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551617 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: 
I1205 01:10:38.551636 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpk6k\" (UniqueName: \"kubernetes.io/projected/bbf181f5-152c-4424-9206-9b2981b901ac-kube-api-access-bpk6k\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551653 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-serving-cert\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551676 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551701 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-audit\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551717 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-config\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551734 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-serving-cert\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551751 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be821014-926b-4b47-a347-e778ef1d085a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551848 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-etcd-serving-ca\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551871 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhcw\" (UniqueName: 
\"kubernetes.io/projected/ad32df98-dc0d-4b57-b02b-b04aa0d9db65-kube-api-access-rnhcw\") pod \"dns-operator-744455d44c-rnwbq\" (UID: \"ad32df98-dc0d-4b57-b02b-b04aa0d9db65\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551906 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551927 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcl84\" (UniqueName: \"kubernetes.io/projected/b8bb3b38-72ab-4295-8b62-99f5f424c711-kube-api-access-zcl84\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551945 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-encryption-config\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551962 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9467079-b825-4a3d-b56c-254057a3b5fb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.551983 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-oauth-serving-cert\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552000 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af7e46a8-0740-4f17-9d89-97eb924a5d39-trusted-ca\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552019 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526c34da-f910-47e3-bccc-eff5e2fb7b59-serving-cert\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552045 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be821014-926b-4b47-a347-e778ef1d085a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: 
\"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552064 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552086 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-oauth-config\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552105 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552125 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552152 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1291830b-16cb-4eab-9d73-6edd38a86882-node-pullsecrets\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552172 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a0a0305-99f9-45d5-b298-383c5f6cc4f6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7rszv\" (UID: \"1a0a0305-99f9-45d5-b298-383c5f6cc4f6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552192 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvvv\" (UniqueName: \"kubernetes.io/projected/1a0a0305-99f9-45d5-b298-383c5f6cc4f6-kube-api-access-rjvvv\") pod \"control-plane-machine-set-operator-78cbb6b69f-7rszv\" (UID: \"1a0a0305-99f9-45d5-b298-383c5f6cc4f6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552210 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7j6\" (UniqueName: \"kubernetes.io/projected/e00f3a37-11c1-4862-b7db-c324afcf2214-kube-api-access-mj7j6\") pod 
\"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552226 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-service-ca\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552245 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/870dfc46-9efe-4184-8bf9-7c8a6a70f6e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5hlb\" (UID: \"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552265 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22ld\" (UniqueName: \"kubernetes.io/projected/870dfc46-9efe-4184-8bf9-7c8a6a70f6e9-kube-api-access-f22ld\") pod \"cluster-samples-operator-665b6dd947-j5hlb\" (UID: \"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552282 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-image-import-ca\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552302 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-client-ca\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552326 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552347 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9m2\" (UniqueName: \"kubernetes.io/projected/13a6b9e4-2ddb-4f53-889d-484647055582-kube-api-access-qk9m2\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552365 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-trusted-ca-bundle\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552387 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bgn9\" (UniqueName: \"kubernetes.io/projected/526c34da-f910-47e3-bccc-eff5e2fb7b59-kube-api-access-5bgn9\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552422 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4f6\" (UniqueName: \"kubernetes.io/projected/6aabcff3-3471-4c91-bebe-e91dca530018-kube-api-access-gz4f6\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552438 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-config\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552456 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552472 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552509 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552526 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-service-ca\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552542 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/af7e46a8-0740-4f17-9d89-97eb924a5d39-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552560 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vt7\" (UniqueName: \"kubernetes.io/projected/f9467079-b825-4a3d-b56c-254057a3b5fb-kube-api-access-h9vt7\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552575 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30745ea-b3b2-4595-82b0-d04d2010e590-serving-cert\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552591 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552607 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-dir\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552623 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be821014-926b-4b47-a347-e778ef1d085a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552639 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a6b9e4-2ddb-4f53-889d-484647055582-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552655 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552670 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af7e46a8-0740-4f17-9d89-97eb924a5d39-metrics-tls\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552687 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf181f5-152c-4424-9206-9b2981b901ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552706 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6n42\" (UniqueName: \"kubernetes.io/projected/77c0a017-4985-4ba7-bc61-35e7a17f3950-kube-api-access-z6n42\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552722 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9467079-b825-4a3d-b56c-254057a3b5fb-config\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552739 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvfwg\" (UniqueName: \"kubernetes.io/projected/af7e46a8-0740-4f17-9d89-97eb924a5d39-kube-api-access-xvfwg\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552756 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1291830b-16cb-4eab-9d73-6edd38a86882-audit-dir\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552771 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad32df98-dc0d-4b57-b02b-b04aa0d9db65-metrics-tls\") pod \"dns-operator-744455d44c-rnwbq\" (UID: \"ad32df98-dc0d-4b57-b02b-b04aa0d9db65\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552787 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552803 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30745ea-b3b2-4595-82b0-d04d2010e590-config\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") 
" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552819 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28kqx\" (UniqueName: \"kubernetes.io/projected/be821014-926b-4b47-a347-e778ef1d085a-kube-api-access-28kqx\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552836 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncmmn\" (UniqueName: \"kubernetes.io/projected/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-kube-api-access-ncmmn\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552852 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f9467079-b825-4a3d-b56c-254057a3b5fb-images\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552867 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-config\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.552983 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-config\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553000 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aabcff3-3471-4c91-bebe-e91dca530018-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553016 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-serving-cert\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553030 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e00f3a37-11c1-4862-b7db-c324afcf2214-machine-approver-tls\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553058 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553081 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbc6n\" (UniqueName: \"kubernetes.io/projected/1291830b-16cb-4eab-9d73-6edd38a86882-kube-api-access-bbc6n\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553100 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx59g\" (UniqueName: \"kubernetes.io/projected/a567c246-6c6e-4f05-bde4-dc9d9dfc3699-kube-api-access-wx59g\") pod \"downloads-7954f5f757-lwm9p\" (UID: \"a567c246-6c6e-4f05-bde4-dc9d9dfc3699\") " pod="openshift-console/downloads-7954f5f757-lwm9p" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553117 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a6b9e4-2ddb-4f53-889d-484647055582-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553134 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-audit-policies\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556836 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00f3a37-11c1-4862-b7db-c324afcf2214-config\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556879 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-client\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556905 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-encryption-config\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556924 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6aabcff3-3471-4c91-bebe-e91dca530018-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556944 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f30745ea-b3b2-4595-82b0-d04d2010e590-trusted-ca\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556962 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-ca\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.556981 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.557006 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-etcd-client\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.557025 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-etcd-client\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.559044 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.561023 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-etcd-client\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.561302 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aabcff3-3471-4c91-bebe-e91dca530018-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.555232 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-client-ca\") pod 
\"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.561461 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30745ea-b3b2-4595-82b0-d04d2010e590-serving-cert\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.554475 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553933 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-audit\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.553211 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.554610 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-config\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.562048 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-audit-policies\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.562382 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f9467079-b825-4a3d-b56c-254057a3b5fb-images\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.562720 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.562988 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad32df98-dc0d-4b57-b02b-b04aa0d9db65-metrics-tls\") pod \"dns-operator-744455d44c-rnwbq\" (UID: \"ad32df98-dc0d-4b57-b02b-b04aa0d9db65\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:38 crc 
kubenswrapper[4990]: I1205 01:10:38.563431 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aabcff3-3471-4c91-bebe-e91dca530018-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.555332 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1291830b-16cb-4eab-9d73-6edd38a86882-audit-dir\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.555655 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-image-import-ca\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.563605 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t5772"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.563643 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-glp4b"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.563656 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/870dfc46-9efe-4184-8bf9-7c8a6a70f6e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5hlb\" (UID: \"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.563818 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.563886 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9467079-b825-4a3d-b56c-254057a3b5fb-config\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.555711 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1291830b-16cb-4eab-9d73-6edd38a86882-node-pullsecrets\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.555880 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-config\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.564277 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-serving-cert\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.564577 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.564614 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/77c0a017-4985-4ba7-bc61-35e7a17f3950-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.568172 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf181f5-152c-4424-9206-9b2981b901ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.568664 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1291830b-16cb-4eab-9d73-6edd38a86882-etcd-serving-ca\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.574119 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-serving-cert\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.574218 4990 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.575310 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f30745ea-b3b2-4595-82b0-d04d2010e590-trusted-ca\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.575562 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/77c0a017-4985-4ba7-bc61-35e7a17f3950-encryption-config\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.575948 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-etcd-client\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.576188 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1291830b-16cb-4eab-9d73-6edd38a86882-encryption-config\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.576247 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30745ea-b3b2-4595-82b0-d04d2010e590-config\") pod \"console-operator-58897d9998-nbpzn\" (UID: \"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.576675 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9467079-b825-4a3d-b56c-254057a3b5fb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.581349 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.582454 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.583316 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.584046 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.584117 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.586171 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.586407 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.586986 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.587733 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.588638 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.589126 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.589259 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nbpzn"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.590371 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nkp9t"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.591584 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.592325 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.593175 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.593433 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.594541 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gdsdh"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.595478 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.595671 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.596189 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.596579 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.597602 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6b99n"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.599215 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lwm9p"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.599310 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.599787 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.600311 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.600866 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gxktl"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.602161 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.603450 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.604615 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnwbq"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.605804 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sxmsc"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.606957 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.611172 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lp5lw"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.612848 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vchzm"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.613665 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g6z24"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.615657 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.616670 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.617896 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.619905 
4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.622799 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.623278 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nkp9t"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.623996 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.626850 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-glp4b"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.627906 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.630274 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lj2c6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.631406 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.632914 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.634614 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.635234 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.636367 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.637436 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.638438 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5jm9j"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.641383 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.643076 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2wzr5"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.645028 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.653977 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657557 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7qlxt"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657846 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcl84\" (UniqueName: \"kubernetes.io/projected/b8bb3b38-72ab-4295-8b62-99f5f424c711-kube-api-access-zcl84\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657889 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526c34da-f910-47e3-bccc-eff5e2fb7b59-serving-cert\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657908 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be821014-926b-4b47-a347-e778ef1d085a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657928 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657951 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-oauth-config\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658414 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.657977 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658703 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a0a0305-99f9-45d5-b298-383c5f6cc4f6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7rszv\" (UID: \"1a0a0305-99f9-45d5-b298-383c5f6cc4f6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658746 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-metrics-certs\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658775 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-264xj\" (UniqueName: \"kubernetes.io/projected/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-kube-api-access-264xj\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658803 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-default-certificate\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658837 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658884 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr47c\" (UniqueName: \"kubernetes.io/projected/73009b29-5f92-4552-969c-669c459575ae-kube-api-access-jr47c\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658915 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bgn9\" (UniqueName: \"kubernetes.io/projected/526c34da-f910-47e3-bccc-eff5e2fb7b59-kube-api-access-5bgn9\") pod \"etcd-operator-b45778765-lj2c6\" (UID: 
\"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658941 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-trusted-ca-bundle\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658971 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658999 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659033 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-service-ca\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659065 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ebb813-fa46-40e3-b728-147dd064f9d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659088 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-proxy-tls\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659114 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659144 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be821014-926b-4b47-a347-e778ef1d085a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659173 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-dir\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659211 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659240 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af7e46a8-0740-4f17-9d89-97eb924a5d39-metrics-tls\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659267 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvfwg\" (UniqueName: \"kubernetes.io/projected/af7e46a8-0740-4f17-9d89-97eb924a5d39-kube-api-access-xvfwg\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.658828 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5jm9j"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659330 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncmmn\" (UniqueName: \"kubernetes.io/projected/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-kube-api-access-ncmmn\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659365 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e00f3a37-11c1-4862-b7db-c324afcf2214-machine-approver-tls\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659391 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-client-ca\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659434 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-srv-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: 
\"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659464 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ea184ab-2c2a-4fbc-8598-01783702463f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659525 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-ca\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659553 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659578 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-policies\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659611 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-config\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659642 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d272e8c9-2d62-4783-94bb-a6a997e08c46-service-ca-bundle\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659656 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be821014-926b-4b47-a347-e778ef1d085a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659669 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlwc\" (UniqueName: \"kubernetes.io/projected/1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d-kube-api-access-4zlwc\") pod \"migrator-59844c95c7-qxd47\" (UID: \"1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" Dec 05 01:10:38 
crc kubenswrapper[4990]: I1205 01:10:38.659717 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659750 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-oauth-serving-cert\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659778 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af7e46a8-0740-4f17-9d89-97eb924a5d39-trusted-ca\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659804 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvvv\" (UniqueName: \"kubernetes.io/projected/1a0a0305-99f9-45d5-b298-383c5f6cc4f6-kube-api-access-rjvvv\") pod \"control-plane-machine-set-operator-78cbb6b69f-7rszv\" (UID: \"1a0a0305-99f9-45d5-b298-383c5f6cc4f6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659833 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj7j6\" (UniqueName: \"kubernetes.io/projected/e00f3a37-11c1-4862-b7db-c324afcf2214-kube-api-access-mj7j6\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659860 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-service-ca\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659888 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ea779-f9e4-49c3-acc7-309d7ef1d174-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659920 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk9m2\" (UniqueName: \"kubernetes.io/projected/13a6b9e4-2ddb-4f53-889d-484647055582-kube-api-access-qk9m2\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659948 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385ea779-f9e4-49c3-acc7-309d7ef1d174-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.659975 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcdc\" (UniqueName: \"kubernetes.io/projected/96ebb813-fa46-40e3-b728-147dd064f9d4-kube-api-access-lzcdc\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660131 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-config\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660138 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660177 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqpl\" (UniqueName: \"kubernetes.io/projected/d272e8c9-2d62-4783-94bb-a6a997e08c46-kube-api-access-nfqpl\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660215 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97c12d0b-8415-47ff-a3cf-c8e906620182-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660254 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660278 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af7e46a8-0740-4f17-9d89-97eb924a5d39-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660535 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97c12d0b-8415-47ff-a3cf-c8e906620182-proxy-tls\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660624 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a6b9e4-2ddb-4f53-889d-484647055582-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660719 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.660822 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqv26\" (UniqueName: \"kubernetes.io/projected/97c12d0b-8415-47ff-a3cf-c8e906620182-kube-api-access-rqv26\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661066 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661100 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28kqx\" (UniqueName: \"kubernetes.io/projected/be821014-926b-4b47-a347-e778ef1d085a-kube-api-access-28kqx\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661127 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-config\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661152 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73009b29-5f92-4552-969c-669c459575ae-serving-cert\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661178 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661204 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-serving-cert\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661419 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-service-ca\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.661816 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-trusted-ca-bundle\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.662230 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.662591 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663003 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9b3d56bb-86a8-44d1-a8b6-4d669458e1e5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-glp4b\" (UID: \"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663092 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ea779-f9e4-49c3-acc7-309d7ef1d174-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663155 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a6b9e4-2ddb-4f53-889d-484647055582-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663189 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-client\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663253 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00f3a37-11c1-4862-b7db-c324afcf2214-config\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663312 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663349 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtst\" (UniqueName: \"kubernetes.io/projected/9ea184ab-2c2a-4fbc-8598-01783702463f-kube-api-access-mwtst\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663413 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jns4b\" (UniqueName: \"kubernetes.io/projected/9b3d56bb-86a8-44d1-a8b6-4d669458e1e5-kube-api-access-jns4b\") pod \"multus-admission-controller-857f4d67dd-glp4b\" (UID: \"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663614 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfd5\" (UniqueName: \"kubernetes.io/projected/f8ee3821-a917-4dba-80f5-f7ad854541cf-kube-api-access-rkfd5\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663656 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663803 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e00f3a37-11c1-4862-b7db-c324afcf2214-auth-proxy-config\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663895 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be821014-926b-4b47-a347-e778ef1d085a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.664237 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96ebb813-fa46-40e3-b728-147dd064f9d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.664285 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.664582 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a0a0305-99f9-45d5-b298-383c5f6cc4f6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7rszv\" (UID: \"1a0a0305-99f9-45d5-b298-383c5f6cc4f6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.664942 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-oauth-config\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665020 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e00f3a37-11c1-4862-b7db-c324afcf2214-machine-approver-tls\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665059 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-policies\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665064 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-dir\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665115 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665270 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526c34da-f910-47e3-bccc-eff5e2fb7b59-serving-cert\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665434 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-stats-auth\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665458 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665500 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be821014-926b-4b47-a347-e778ef1d085a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665524 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-images\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.665542 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-config\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.663521 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-ca\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.666378 4990 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-config\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.666823 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af7e46a8-0740-4f17-9d89-97eb924a5d39-metrics-tls\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.666915 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.666952 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-oauth-serving-cert\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.667299 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-config\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.667357 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00f3a37-11c1-4862-b7db-c324afcf2214-config\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.667869 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.668203 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.668350 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-service-ca\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.668600 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af7e46a8-0740-4f17-9d89-97eb924a5d39-trusted-ca\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.668876 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e00f3a37-11c1-4862-b7db-c324afcf2214-auth-proxy-config\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.669008 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-serving-cert\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.669303 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a6b9e4-2ddb-4f53-889d-484647055582-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.669348 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a6b9e4-2ddb-4f53-889d-484647055582-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.670010 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-config\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.671230 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gdsdh"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.671291 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.671605 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.671903 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.671990 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.672779 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.673184 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6b99n"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.673941 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/526c34da-f910-47e3-bccc-eff5e2fb7b59-etcd-client\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.674611 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.675887 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mn2kc"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.678132 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mn2kc"] Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.678253 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.681628 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.700569 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.722262 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.743165 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.760637 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766636 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-client-ca\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766681 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-srv-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766719 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ea184ab-2c2a-4fbc-8598-01783702463f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766769 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d272e8c9-2d62-4783-94bb-a6a997e08c46-service-ca-bundle\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766796 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlwc\" (UniqueName: \"kubernetes.io/projected/1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d-kube-api-access-4zlwc\") pod \"migrator-59844c95c7-qxd47\" (UID: \"1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766851 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ea779-f9e4-49c3-acc7-309d7ef1d174-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766888 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385ea779-f9e4-49c3-acc7-309d7ef1d174-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766918 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzcdc\" (UniqueName: \"kubernetes.io/projected/96ebb813-fa46-40e3-b728-147dd064f9d4-kube-api-access-lzcdc\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766966 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfqpl\" (UniqueName: \"kubernetes.io/projected/d272e8c9-2d62-4783-94bb-a6a997e08c46-kube-api-access-nfqpl\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.766998 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97c12d0b-8415-47ff-a3cf-c8e906620182-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767068 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97c12d0b-8415-47ff-a3cf-c8e906620182-proxy-tls\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767097 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqv26\" (UniqueName: \"kubernetes.io/projected/97c12d0b-8415-47ff-a3cf-c8e906620182-kube-api-access-rqv26\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767126 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767163 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73009b29-5f92-4552-969c-669c459575ae-serving-cert\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: 
\"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767189 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9b3d56bb-86a8-44d1-a8b6-4d669458e1e5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-glp4b\" (UID: \"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767222 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ea779-f9e4-49c3-acc7-309d7ef1d174-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767250 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767274 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwtst\" (UniqueName: \"kubernetes.io/projected/9ea184ab-2c2a-4fbc-8598-01783702463f-kube-api-access-mwtst\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767300 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jns4b\" (UniqueName: \"kubernetes.io/projected/9b3d56bb-86a8-44d1-a8b6-4d669458e1e5-kube-api-access-jns4b\") pod \"multus-admission-controller-857f4d67dd-glp4b\" (UID: \"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767326 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfd5\" (UniqueName: \"kubernetes.io/projected/f8ee3821-a917-4dba-80f5-f7ad854541cf-kube-api-access-rkfd5\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767384 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96ebb813-fa46-40e3-b728-147dd064f9d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767415 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-stats-auth\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " 
pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767450 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-images\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767476 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-config\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767555 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-metrics-certs\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767583 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-264xj\" (UniqueName: \"kubernetes.io/projected/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-kube-api-access-264xj\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767609 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-default-certificate\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767648 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr47c\" (UniqueName: \"kubernetes.io/projected/73009b29-5f92-4552-969c-669c459575ae-kube-api-access-jr47c\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767688 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ebb813-fa46-40e3-b728-147dd064f9d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767713 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-proxy-tls\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.767737 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.768454 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.769504 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97c12d0b-8415-47ff-a3cf-c8e906620182-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.782207 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.793028 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-stats-auth\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.801817 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.820954 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.842531 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.852423 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-default-certificate\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.862531 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.869181 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d272e8c9-2d62-4783-94bb-a6a997e08c46-service-ca-bundle\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.882775 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 
01:10:38.895073 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d272e8c9-2d62-4783-94bb-a6a997e08c46-metrics-certs\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.901472 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.921777 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.929414 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.929530 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.929414 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.942839 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.962361 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 01:10:38 crc kubenswrapper[4990]: I1205 01:10:38.981992 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.002061 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.037032 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.044344 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.062475 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.083526 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.102517 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.121905 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.142572 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.149899 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-config\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.162579 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.182679 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.202944 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.209902 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-client-ca\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.231022 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.240992 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.242674 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.261960 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.275652 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73009b29-5f92-4552-969c-669c459575ae-serving-cert\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.282415 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.302195 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.315122 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385ea779-f9e4-49c3-acc7-309d7ef1d174-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:39 
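(Editor's note: the reflector.go:368 "Caches populated" entries pair off with the mounts around them: at 01:10:39.142 the cache for the openshift-controller-manager/config ConfigMap fills, and at 01:10:39.149 the config volume built from it mounts. ConfigMap and Secret volumes are served from a local informer cache fed by a watch rather than by direct GETs. A client-go sketch of that populate-then-read pattern, using standard client-go calls; the kubeconfig handling is illustrative:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
        secrets := factory.Core().V1().Secrets().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // kicks off a reflector per informer

        // Block until the local cache holds a full LIST of secrets;
        // reads before this point can miss objects that exist in etcd.
        if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
            fmt.Println("cache never synced")
            return
        }
        fmt.Println("secret cache populated")
    }

The "failed to sync secret cache" errors a few seconds later in this log are exactly the case where a consumer reads before that sync gate opens.)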
crc kubenswrapper[4990]: I1205 01:10:39.321472 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.342915 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.349800 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385ea779-f9e4-49c3-acc7-309d7ef1d174-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.363019 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.382741 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.402383 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.411052 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ebb813-fa46-40e3-b728-147dd064f9d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.422132 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.434151 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96ebb813-fa46-40e3-b728-147dd064f9d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.442200 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.494578 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpk6k\" (UniqueName: \"kubernetes.io/projected/bbf181f5-152c-4424-9206-9b2981b901ac-kube-api-access-bpk6k\") pod \"route-controller-manager-6576b87f9c-wqk2q\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.510410 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgcb\" (UniqueName: \"kubernetes.io/projected/f30745ea-b3b2-4595-82b0-d04d2010e590-kube-api-access-cvgcb\") pod \"console-operator-58897d9998-nbpzn\" (UID: 
\"f30745ea-b3b2-4595-82b0-d04d2010e590\") " pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.529127 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vt7\" (UniqueName: \"kubernetes.io/projected/f9467079-b825-4a3d-b56c-254057a3b5fb-kube-api-access-h9vt7\") pod \"machine-api-operator-5694c8668f-t5772\" (UID: \"f9467079-b825-4a3d-b56c-254057a3b5fb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.538906 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4f6\" (UniqueName: \"kubernetes.io/projected/6aabcff3-3471-4c91-bebe-e91dca530018-kube-api-access-gz4f6\") pod \"openshift-apiserver-operator-796bbdcf4f-x29jp\" (UID: \"6aabcff3-3471-4c91-bebe-e91dca530018\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.547182 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.561122 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbc6n\" (UniqueName: \"kubernetes.io/projected/1291830b-16cb-4eab-9d73-6edd38a86882-kube-api-access-bbc6n\") pod \"apiserver-76f77b778f-gxktl\" (UID: \"1291830b-16cb-4eab-9d73-6edd38a86882\") " pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.565113 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.579908 4990 request.go:700] Waited for 1.018411952s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.585254 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx59g\" (UniqueName: \"kubernetes.io/projected/a567c246-6c6e-4f05-bde4-dc9d9dfc3699-kube-api-access-wx59g\") pod \"downloads-7954f5f757-lwm9p\" (UID: \"a567c246-6c6e-4f05-bde4-dc9d9dfc3699\") " pod="openshift-console/downloads-7954f5f757-lwm9p" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.618035 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6n42\" (UniqueName: \"kubernetes.io/projected/77c0a017-4985-4ba7-bc61-35e7a17f3950-kube-api-access-z6n42\") pod \"apiserver-7bbb656c7d-qqplk\" (UID: \"77c0a017-4985-4ba7-bc61-35e7a17f3950\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.624753 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.630366 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22ld\" (UniqueName: \"kubernetes.io/projected/870dfc46-9efe-4184-8bf9-7c8a6a70f6e9-kube-api-access-f22ld\") pod \"cluster-samples-operator-665b6dd947-j5hlb\" (UID: \"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.630409 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-images\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.632011 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.634958 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.641436 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.643173 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.652237 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.655690 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-proxy-tls\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.661835 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-lwm9p" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.683713 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.686757 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhcw\" (UniqueName: \"kubernetes.io/projected/ad32df98-dc0d-4b57-b02b-b04aa0d9db65-kube-api-access-rnhcw\") pod \"dns-operator-744455d44c-rnwbq\" (UID: \"ad32df98-dc0d-4b57-b02b-b04aa0d9db65\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.702305 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.723095 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.734931 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9b3d56bb-86a8-44d1-a8b6-4d669458e1e5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-glp4b\" (UID: \"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.742072 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.763647 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.768301 4990 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.768369 4990 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.768423 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ea184ab-2c2a-4fbc-8598-01783702463f-package-server-manager-serving-cert podName:9ea184ab-2c2a-4fbc-8598-01783702463f nodeName:}" failed. No retries permitted until 2025-12-05 01:10:40.268395823 +0000 UTC m=+138.644611184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/9ea184ab-2c2a-4fbc-8598-01783702463f-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-5thbl" (UID: "9ea184ab-2c2a-4fbc-8598-01783702463f") : failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.768471 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-srv-cert podName:f8ee3821-a917-4dba-80f5-f7ad854541cf nodeName:}" failed. No retries permitted until 2025-12-05 01:10:40.268437464 +0000 UTC m=+138.644652835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-srv-cert") pod "catalog-operator-68c6474976-ggfl6" (UID: "f8ee3821-a917-4dba-80f5-f7ad854541cf") : failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.770627 4990 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.770703 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-profile-collector-cert podName:f8ee3821-a917-4dba-80f5-f7ad854541cf nodeName:}" failed. No retries permitted until 2025-12-05 01:10:40.270690062 +0000 UTC m=+138.646905443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-profile-collector-cert") pod "catalog-operator-68c6474976-ggfl6" (UID: "f8ee3821-a917-4dba-80f5-f7ad854541cf") : failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.770979 4990 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: E1205 01:10:39.771018 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c12d0b-8415-47ff-a3cf-c8e906620182-proxy-tls podName:97c12d0b-8415-47ff-a3cf-c8e906620182 nodeName:}" failed. No retries permitted until 2025-12-05 01:10:40.271007841 +0000 UTC m=+138.647223212 (durationBeforeRetry 500ms). 
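(Editor's note: these E1205 errors are an ordering problem, not a permanent failure: the pods were scheduled before the relevant secret informer caches finished syncing, so the first SetUp attempt times out, and nestedpendingoperations.go:348 parks each volume with a 500ms durationBeforeRetry; the failures stamped 01:10:39.768 become retryable at 01:10:40.268. A minimal sketch of that per-volume backoff bookkeeping; the type, the doubling scheme, and the two-minute cap are my illustration, not kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // perVolumeBackoff tracks, per volume, when the next attempt is allowed.
    type perVolumeBackoff struct {
        delay map[string]time.Duration
        next  map[string]time.Time
    }

    func newBackoff() *perVolumeBackoff {
        return &perVolumeBackoff{delay: map[string]time.Duration{}, next: map[string]time.Time{}}
    }

    func (b *perVolumeBackoff) allowed(vol string, now time.Time) bool {
        return now.After(b.next[vol])
    }

    // failed doubles the delay (starting at 500ms, capped here at two minutes)
    // and forbids retries until now+delay -- the "No retries permitted until" line.
    func (b *perVolumeBackoff) failed(vol string, now time.Time) {
        d := b.delay[vol]
        if d == 0 {
            d = 500 * time.Millisecond
        } else if d *= 2; d > 2*time.Minute {
            d = 2 * time.Minute
        }
        b.delay[vol] = d
        b.next[vol] = now.Add(d)
        fmt.Printf("%s: no retries permitted until %s (durationBeforeRetry %v)\n",
            vol, now.Add(d).Format("15:04:05.000"), d)
    }

    func (b *perVolumeBackoff) succeeded(vol string) { // reset on success
        delete(b.delay, vol)
        delete(b.next, vol)
    }

    func main() {
        b := newBackoff()
        now := time.Now()
        b.failed("proxy-tls", now)
        fmt.Println("retry allowed now?", b.allowed("proxy-tls", now))
        fmt.Println("retry allowed +600ms?", b.allowed("proxy-tls", now.Add(600*time.Millisecond)))
    }

Further down, the same volumes (package-server-manager-serving-cert, srv-cert, profile-collector-cert, proxy-tls) mount successfully at 01:10:40.30x, right after the missing caches populate at 01:10:39.79 through 01:10:39.84, so the backoff only has to fire once here.)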
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/97c12d0b-8415-47ff-a3cf-c8e906620182-proxy-tls") pod "machine-config-controller-84d6567774-vm68w" (UID: "97c12d0b-8415-47ff-a3cf-c8e906620182") : failed to sync secret cache: timed out waiting for the condition Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.790011 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.802911 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.821506 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.842143 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.848834 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"] Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.863617 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.883179 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.891964 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.902424 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.911894 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.922602 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.941518 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.961809 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nbpzn"] Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.962019 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 01:10:39 crc kubenswrapper[4990]: I1205 01:10:39.989339 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.009014 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.013076 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t5772"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.014550 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lwm9p"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.022884 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.061560 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.074962 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.082684 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.101917 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.110766 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.123184 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gxktl"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.124717 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.142009 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.161836 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 01:10:40 crc kubenswrapper[4990]: W1205 01:10:40.175221 4990 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9467079_b825_4a3d_b56c_254057a3b5fb.slice/crio-0a71a98f1441511f38242a529b2f80b175c1ea07ce1a0a7bced0942109c9f7f8 WatchSource:0}: Error finding container 0a71a98f1441511f38242a529b2f80b175c1ea07ce1a0a7bced0942109c9f7f8: Status 404 returned error can't find the container with id 0a71a98f1441511f38242a529b2f80b175c1ea07ce1a0a7bced0942109c9f7f8 Dec 05 01:10:40 crc kubenswrapper[4990]: W1205 01:10:40.176871 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda567c246_6c6e_4f05_bde4_dc9d9dfc3699.slice/crio-93adcc6439baa9e846b594d9d64b210c63ea9d52090d011f5f2a1a58512ef806 WatchSource:0}: Error finding container 93adcc6439baa9e846b594d9d64b210c63ea9d52090d011f5f2a1a58512ef806: Status 404 returned error can't find the container with id 93adcc6439baa9e846b594d9d64b210c63ea9d52090d011f5f2a1a58512ef806 Dec 05 01:10:40 crc kubenswrapper[4990]: W1205 01:10:40.178647 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77c0a017_4985_4ba7_bc61_35e7a17f3950.slice/crio-b638bbfc95056508b2f19f21f7d6ff770a1e9ee17d19de733e31ae57222826f4 WatchSource:0}: Error finding container b638bbfc95056508b2f19f21f7d6ff770a1e9ee17d19de733e31ae57222826f4: Status 404 returned error can't find the container with id b638bbfc95056508b2f19f21f7d6ff770a1e9ee17d19de733e31ae57222826f4 Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.181101 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 01:10:40 crc kubenswrapper[4990]: W1205 01:10:40.186415 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1291830b_16cb_4eab_9d73_6edd38a86882.slice/crio-2a4e1ab6eae4638a7a137a0d6302dbbfa6de4051227b3424282147da9cd3e268 WatchSource:0}: Error finding container 2a4e1ab6eae4638a7a137a0d6302dbbfa6de4051227b3424282147da9cd3e268: Status 404 returned error can't find the container with id 2a4e1ab6eae4638a7a137a0d6302dbbfa6de4051227b3424282147da9cd3e268 Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.202590 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.204649 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.227323 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.244419 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.261725 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.281747 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.300588 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
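(Editor's note: the W1205 manager.go:1169 warnings come from the cAdvisor side of the kubelet: a cgroup watch event announces a new crio-<id> container, but the lookup that follows returns a 404 because the runtime has not finished registering it. As far as this log shows, it is a transient race during mass pod startup rather than a failure; the same container IDs (0a71a98f…, 93adcc64…, b638bbfc…, 2a4e1ab6…) appear as started pods moments later. A generic sketch of treating not-found as retryable; the sentinel error and helper are invented for illustration:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNotFound = errors.New("can't find the container") // stand-in for the runtime's 404

    // lookupWithRetry retries a racy lookup a few times before giving up,
    // since "not found" right after a watch event often just means "not yet".
    func lookupWithRetry(id string, lookup func(string) error) error {
        var err error
        for attempt, wait := 0, 50*time.Millisecond; attempt < 4; attempt, wait = attempt+1, wait*2 {
            if err = lookup(id); err == nil || !errors.Is(err, errNotFound) {
                return err
            }
            time.Sleep(wait)
        }
        return fmt.Errorf("giving up on container %s: %w", id, err)
    }

    func main() {
        calls := 0
        err := lookupWithRetry("0a71a98f1441", func(string) error {
            if calls++; calls < 3 {
                return errNotFound // first looks fail, as in the 404 warnings above
            }
            return nil
        })
        fmt.Println("result:", err)
    })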
\"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-srv-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.300628 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ea184ab-2c2a-4fbc-8598-01783702463f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.300775 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97c12d0b-8415-47ff-a3cf-c8e906620182-proxy-tls\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.300799 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.307117 4990 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.309436 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ea184ab-2c2a-4fbc-8598-01783702463f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.311583 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97c12d0b-8415-47ff-a3cf-c8e906620182-proxy-tls\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.311637 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.314555 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8ee3821-a917-4dba-80f5-f7ad854541cf-srv-cert\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 
01:10:40.321273 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.321923 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnwbq"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.341151 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.361923 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.384118 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.402240 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.439169 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcl84\" (UniqueName: \"kubernetes.io/projected/b8bb3b38-72ab-4295-8b62-99f5f424c711-kube-api-access-zcl84\") pod \"console-f9d7485db-g6z24\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.442316 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.461325 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.482039 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.519196 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bgn9\" (UniqueName: \"kubernetes.io/projected/526c34da-f910-47e3-bccc-eff5e2fb7b59-kube-api-access-5bgn9\") pod \"etcd-operator-b45778765-lj2c6\" (UID: \"526c34da-f910-47e3-bccc-eff5e2fb7b59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.536446 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncmmn\" (UniqueName: \"kubernetes.io/projected/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-kube-api-access-ncmmn\") pod \"oauth-openshift-558db77b4-lp5lw\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.562870 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk9m2\" (UniqueName: \"kubernetes.io/projected/13a6b9e4-2ddb-4f53-889d-484647055582-kube-api-access-qk9m2\") pod \"openshift-controller-manager-operator-756b6f6bc6-jx8km\" (UID: \"13a6b9e4-2ddb-4f53-889d-484647055582\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.580023 4990 request.go:700] Waited for 1.917211641s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.613235 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvfwg\" (UniqueName: \"kubernetes.io/projected/af7e46a8-0740-4f17-9d89-97eb924a5d39-kube-api-access-xvfwg\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.613904 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.628321 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvvv\" (UniqueName: \"kubernetes.io/projected/1a0a0305-99f9-45d5-b298-383c5f6cc4f6-kube-api-access-rjvvv\") pod \"control-plane-machine-set-operator-78cbb6b69f-7rszv\" (UID: \"1a0a0305-99f9-45d5-b298-383c5f6cc4f6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.638407 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af7e46a8-0740-4f17-9d89-97eb924a5d39-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vq4v6\" (UID: \"af7e46a8-0740-4f17-9d89-97eb924a5d39\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.648097 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.663222 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be821014-926b-4b47-a347-e778ef1d085a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.691666 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj7j6\" (UniqueName: \"kubernetes.io/projected/e00f3a37-11c1-4862-b7db-c324afcf2214-kube-api-access-mj7j6\") pod \"machine-approver-56656f9798-42lhm\" (UID: \"e00f3a37-11c1-4862-b7db-c324afcf2214\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.699082 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28kqx\" (UniqueName: \"kubernetes.io/projected/be821014-926b-4b47-a347-e778ef1d085a-kube-api-access-28kqx\") pod \"cluster-image-registry-operator-dc59b4c8b-dhllr\" (UID: \"be821014-926b-4b47-a347-e778ef1d085a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.700787 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.708050 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.716735 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.722180 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1be72bab-0eb7-4dc5-bc23-9f2eb261d76c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ztdtk\" (UID: \"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.722198 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" event={"ID":"bbf181f5-152c-4424-9206-9b2981b901ac","Type":"ContainerStarted","Data":"f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.722240 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" event={"ID":"bbf181f5-152c-4424-9206-9b2981b901ac","Type":"ContainerStarted","Data":"534d6d4686068a9089890bdf5f807271f5508cc68241c76e4abd6fc1fc9767b1"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.722754 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.723655 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.725388 4990 generic.go:334] "Generic (PLEG): container finished" podID="1291830b-16cb-4eab-9d73-6edd38a86882" containerID="cc58ced28b12d7121d5867e329c65a147287c6b55528a25632a86238cc455aa0" exitCode=0 Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.725801 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" event={"ID":"1291830b-16cb-4eab-9d73-6edd38a86882","Type":"ContainerDied","Data":"cc58ced28b12d7121d5867e329c65a147287c6b55528a25632a86238cc455aa0"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.726575 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" event={"ID":"1291830b-16cb-4eab-9d73-6edd38a86882","Type":"ContainerStarted","Data":"2a4e1ab6eae4638a7a137a0d6302dbbfa6de4051227b3424282147da9cd3e268"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.731544 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lwm9p" event={"ID":"a567c246-6c6e-4f05-bde4-dc9d9dfc3699","Type":"ContainerStarted","Data":"a9e933d36c2f4e534c3ab0e4f97f0a792df7adf397241aee689ef6ead40e8d1a"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.731668 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lwm9p" event={"ID":"a567c246-6c6e-4f05-bde4-dc9d9dfc3699","Type":"ContainerStarted","Data":"93adcc6439baa9e846b594d9d64b210c63ea9d52090d011f5f2a1a58512ef806"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.731806 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-lwm9p" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.735359 4990 patch_prober.go:28] interesting pod/downloads-7954f5f757-lwm9p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.735414 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lwm9p" podUID="a567c246-6c6e-4f05-bde4-dc9d9dfc3699" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.737870 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" event={"ID":"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9","Type":"ContainerStarted","Data":"d53d335aee0dcdc429ae889ce87e611330ade949a5b66b972af166e4a682c599"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.755228 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" event={"ID":"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9","Type":"ContainerStarted","Data":"8f88069807bdcdb9b702a6066b640722eb8640eccacce230491e84efb602e766"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.755258 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" event={"ID":"870dfc46-9efe-4184-8bf9-7c8a6a70f6e9","Type":"ContainerStarted","Data":"1bbd587922e90c04efd289800c40d890bb4884b894ceb2b4612d8d7a43482ce1"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.761923 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.771032 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" event={"ID":"f9467079-b825-4a3d-b56c-254057a3b5fb","Type":"ContainerStarted","Data":"7d30298efe5955a372c713eef44b6cd6bd4a8ef9ef4d8a34444b4caedc0a7d7f"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.771179 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" event={"ID":"f9467079-b825-4a3d-b56c-254057a3b5fb","Type":"ContainerStarted","Data":"c4dd90ce45814364c585463f5cb8d70b776ef6d8290dfe11b5268cea7005a15d"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.771241 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" event={"ID":"f9467079-b825-4a3d-b56c-254057a3b5fb","Type":"ContainerStarted","Data":"0a71a98f1441511f38242a529b2f80b175c1ea07ce1a0a7bced0942109c9f7f8"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.777366 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.784210 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.791083 4990 generic.go:334] "Generic (PLEG): container finished" podID="77c0a017-4985-4ba7-bc61-35e7a17f3950" 
containerID="222a47736dda4dc848fdbf5f696d8c19509744fc666b17196d4aa8350091f505" exitCode=0 Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.791166 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" event={"ID":"77c0a017-4985-4ba7-bc61-35e7a17f3950","Type":"ContainerDied","Data":"222a47736dda4dc848fdbf5f696d8c19509744fc666b17196d4aa8350091f505"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.791205 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" event={"ID":"77c0a017-4985-4ba7-bc61-35e7a17f3950","Type":"ContainerStarted","Data":"b638bbfc95056508b2f19f21f7d6ff770a1e9ee17d19de733e31ae57222826f4"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.797985 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" event={"ID":"f30745ea-b3b2-4595-82b0-d04d2010e590","Type":"ContainerStarted","Data":"c6daa73a2fc1636cab931796d6180902b48fee901bd5b33b804dc9f5cdcd001b"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.798022 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" event={"ID":"f30745ea-b3b2-4595-82b0-d04d2010e590","Type":"ContainerStarted","Data":"9220a6084c502dc42f5380bba922c515c98210e3abce8758e79da83d89d4c9b1"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.799962 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.809007 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" event={"ID":"ad32df98-dc0d-4b57-b02b-b04aa0d9db65","Type":"ContainerStarted","Data":"59a1d6a894f4901fda3191624bdd5498044934f2da7c0bd5373b99bf0b483bc1"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.814382 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" event={"ID":"6aabcff3-3471-4c91-bebe-e91dca530018","Type":"ContainerStarted","Data":"70622cc248a298b5e4f4bef4068c91c257396e04a37d7f7d067d88c0a3bb4ad1"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.814452 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" event={"ID":"6aabcff3-3471-4c91-bebe-e91dca530018","Type":"ContainerStarted","Data":"edb16350acc9fd2d7b101fdf374dbddd9283d4b864e42ddaeeaf42e5ee133d5e"} Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.828005 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zlwc\" (UniqueName: \"kubernetes.io/projected/1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d-kube-api-access-4zlwc\") pod \"migrator-59844c95c7-qxd47\" (UID: \"1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.839605 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.847546 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385ea779-f9e4-49c3-acc7-309d7ef1d174-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-vgq6w\" (UID: \"385ea779-f9e4-49c3-acc7-309d7ef1d174\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.867703 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzcdc\" (UniqueName: \"kubernetes.io/projected/96ebb813-fa46-40e3-b728-147dd064f9d4-kube-api-access-lzcdc\") pod \"kube-storage-version-migrator-operator-b67b599dd-fqh9n\" (UID: \"96ebb813-fa46-40e3-b728-147dd064f9d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.891757 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.899294 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfqpl\" (UniqueName: \"kubernetes.io/projected/d272e8c9-2d62-4783-94bb-a6a997e08c46-kube-api-access-nfqpl\") pod \"router-default-5444994796-slxt5\" (UID: \"d272e8c9-2d62-4783-94bb-a6a997e08c46\") " pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.902232 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.909084 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-264xj\" (UniqueName: \"kubernetes.io/projected/60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7-kube-api-access-264xj\") pod \"machine-config-operator-74547568cd-wdvxr\" (UID: \"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.924664 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqv26\" (UniqueName: \"kubernetes.io/projected/97c12d0b-8415-47ff-a3cf-c8e906620182-kube-api-access-rqv26\") pod \"machine-config-controller-84d6567774-vm68w\" (UID: \"97c12d0b-8415-47ff-a3cf-c8e906620182\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.933260 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.954295 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr47c\" (UniqueName: \"kubernetes.io/projected/73009b29-5f92-4552-969c-669c459575ae-kube-api-access-jr47c\") pod \"controller-manager-879f6c89f-2wzr5\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.957865 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lp5lw"] Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.961169 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.962306 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkfd5\" (UniqueName: \"kubernetes.io/projected/f8ee3821-a917-4dba-80f5-f7ad854541cf-kube-api-access-rkfd5\") pod \"catalog-operator-68c6474976-ggfl6\" (UID: \"f8ee3821-a917-4dba-80f5-f7ad854541cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.989957 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jns4b\" (UniqueName: \"kubernetes.io/projected/9b3d56bb-86a8-44d1-a8b6-4d669458e1e5-kube-api-access-jns4b\") pod \"multus-admission-controller-857f4d67dd-glp4b\" (UID: \"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:40 crc kubenswrapper[4990]: I1205 01:10:40.999268 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.003118 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwtst\" (UniqueName: \"kubernetes.io/projected/9ea184ab-2c2a-4fbc-8598-01783702463f-kube-api-access-mwtst\") pod \"package-server-manager-789f6589d5-5thbl\" (UID: \"9ea184ab-2c2a-4fbc-8598-01783702463f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.003550 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.021817 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.043947 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.044234 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.052868 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.055646 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.062902 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.107612 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.113171 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.114028 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lj2c6"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.119886 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120318 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-trusted-ca\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120396 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120434 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqvq5\" (UniqueName: \"kubernetes.io/projected/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-kube-api-access-fqvq5\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120500 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120525 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0a3d34b-f752-4335-99b7-60fa17cde89f-webhook-cert\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120557 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-bound-sa-token\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120576 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a28fbd4-a08e-4189-936c-f1a544953752-srv-cert\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120595 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36a21a76-dd28-45fd-b4c6-4220c7c83410-serving-cert\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120663 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-service-ca-bundle\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120682 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-tls\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120715 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-serving-cert\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120733 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z6f\" (UniqueName: \"kubernetes.io/projected/36a21a76-dd28-45fd-b4c6-4220c7c83410-kube-api-access-d5z6f\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120749 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0a3d34b-f752-4335-99b7-60fa17cde89f-apiservice-cert\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120801 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvpg\" (UniqueName: \"kubernetes.io/projected/6a28fbd4-a08e-4189-936c-f1a544953752-kube-api-access-drvpg\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120899 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b0a3d34b-f752-4335-99b7-60fa17cde89f-tmpfs\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120916 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fde7ef59-700e-49a8-87f5-eac2580a1a54-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120931 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-certificates\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120957 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.120978 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/36a21a76-dd28-45fd-b4c6-4220c7c83410-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.123533 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-config\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.123595 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fde7ef59-700e-49a8-87f5-eac2580a1a54-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.123616 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmt5k\" (UniqueName: \"kubernetes.io/projected/b0a3d34b-f752-4335-99b7-60fa17cde89f-kube-api-access-bmt5k\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.123700 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 01:10:41.623686558 +0000 UTC m=+139.999901919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.124088 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a28fbd4-a08e-4189-936c-f1a544953752-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.124118 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswqd\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-kube-api-access-tswqd\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.124142 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.124166 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-config\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.130799 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.138952 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.147465 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.159277 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.180996 4990 util.go:30] "No sandbox for pod can be found. 
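The E1205 entry above is the first hard failure in this window, and it repeats: the same MountVolume.MountDevice for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails again at 01:10:41.338992, and an UnmountVolume.TearDown for the old registry pod 8f668bae-612b-4b75-9490-919e737c6a3b fails the same way at 01:10:41.237108, each parked by nestedpendingoperations with a 500ms durationBeforeRetry. The kubelet is mounting volumes for hostpath-provisioner/csi-hostpathplugin-6b99n in these same seconds, so the likely reading is that the kubevirt.io.hostpath-provisioner driver simply has not registered yet and the retries clear once that pod is up. A minimal sketch for extracting the blocked operations from an excerpt like this one, assuming one journal entry per line on stdin; the regexes are written against the exact message shapes above, not any general kubelet format:

import re
import sys

# Clause shapes copied from the E1205 entries in this excerpt.
RETRY_RE = re.compile(
    r'No retries permitted until (?P<until>\d{4}-\d{2}-\d{2} [\d:.]+) '
    r'\+0000 UTC m=\+[\d.]+ \(durationBeforeRetry (?P<backoff>\S+)\)'
)
FAIL_RE = re.compile(
    r'Error: (?P<op>\w+Volume\.\w+) failed for volume "(?P<vol>[^"]+)".*'
    r'driver name (?P<driver>\S+) not found in the list of registered CSI drivers'
)

def blocked_csi_ops(lines):
    """Yield (operation, volume, driver, retry time, backoff) for each
    'CSI driver not registered' failure in a kubelet journal excerpt."""
    for line in lines:
        fail = FAIL_RE.search(line)
        retry = RETRY_RE.search(line)
        if fail and retry:  # both clauses appear in the same E1205 entry
            yield fail['op'], fail['vol'], fail['driver'], retry['until'], retry['backoff']

for op, vol, driver, until, backoff in blocked_csi_ops(sys.stdin):
    print(f"{op}  {vol}  waiting on {driver}  retry at {until} ({backoff})")

Piped from something like journalctl -u kubelet (assuming that unit name on this node), this reduces the three failures in the excerpt to a single PVC waiting on a single unregistered driver, which is easier to judge than the interleaved entries.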
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.204648 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g6z24"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232097 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232358 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvv8x\" (UniqueName: \"kubernetes.io/projected/c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1-kube-api-access-mvv8x\") pod \"ingress-canary-mn2kc\" (UID: \"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1\") " pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232427 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b0a3d34b-f752-4335-99b7-60fa17cde89f-tmpfs\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232450 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fde7ef59-700e-49a8-87f5-eac2580a1a54-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232473 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232519 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-certificates\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232546 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-metrics-tls\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232601 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232631 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/36a21a76-dd28-45fd-b4c6-4220c7c83410-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232740 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v58s\" (UniqueName: \"kubernetes.io/projected/e3b21d39-3456-4a12-a91b-459864e74087-kube-api-access-6v58s\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232759 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1a4b043f-328d-4f7f-9010-6df08452fdc7-certs\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232789 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcpl\" (UniqueName: \"kubernetes.io/projected/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-kube-api-access-fhcpl\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232852 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fde7ef59-700e-49a8-87f5-eac2580a1a54-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232871 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-config\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232892 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnqj\" (UniqueName: \"kubernetes.io/projected/61a82c3b-7baa-448b-a9c6-4647454a2850-kube-api-access-dsnqj\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232911 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmt5k\" (UniqueName: \"kubernetes.io/projected/b0a3d34b-f752-4335-99b7-60fa17cde89f-kube-api-access-bmt5k\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 
crc kubenswrapper[4990]: I1205 01:10:41.232931 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-config-volume\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232951 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b54b872d-8289-4d13-85e5-af8db7a35e8f-signing-key\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232973 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b21d39-3456-4a12-a91b-459864e74087-secret-volume\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.232991 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a82c3b-7baa-448b-a9c6-4647454a2850-serving-cert\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233205 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-plugins-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233229 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1a4b043f-328d-4f7f-9010-6df08452fdc7-node-bootstrap-token\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233268 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a28fbd4-a08e-4189-936c-f1a544953752-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233290 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-csi-data-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233330 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tswqd\" (UniqueName: 
\"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-kube-api-access-tswqd\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233356 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233427 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-config\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233471 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-registration-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233512 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233536 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-trusted-ca\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233677 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233725 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqvq5\" (UniqueName: \"kubernetes.io/projected/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-kube-api-access-fqvq5\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233843 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0a3d34b-f752-4335-99b7-60fa17cde89f-webhook-cert\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: 
\"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233884 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7jv2\" (UniqueName: \"kubernetes.io/projected/dea57213-02e7-4a09-adcd-a67306d41b54-kube-api-access-z7jv2\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233949 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-bound-sa-token\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.233996 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36a21a76-dd28-45fd-b4c6-4220c7c83410-serving-cert\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234016 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-mountpoint-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234033 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a82c3b-7baa-448b-a9c6-4647454a2850-config\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234053 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9846c\" (UniqueName: \"kubernetes.io/projected/f901191e-752f-4cca-bf08-3274cf6a9254-kube-api-access-9846c\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234070 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a28fbd4-a08e-4189-936c-f1a544953752-srv-cert\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234104 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-socket-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234121 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd6gp\" (UniqueName: \"kubernetes.io/projected/1a4b043f-328d-4f7f-9010-6df08452fdc7-kube-api-access-sd6gp\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234152 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1-cert\") pod \"ingress-canary-mn2kc\" (UID: \"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1\") " pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234217 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-service-ca-bundle\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234235 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-tls\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234290 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-serving-cert\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234308 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5z6f\" (UniqueName: \"kubernetes.io/projected/36a21a76-dd28-45fd-b4c6-4220c7c83410-kube-api-access-d5z6f\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234359 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0a3d34b-f752-4335-99b7-60fa17cde89f-apiservice-cert\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234386 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b54b872d-8289-4d13-85e5-af8db7a35e8f-signing-cabundle\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234403 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e3b21d39-3456-4a12-a91b-459864e74087-config-volume\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234419 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvpg\" (UniqueName: \"kubernetes.io/projected/6a28fbd4-a08e-4189-936c-f1a544953752-kube-api-access-drvpg\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.234469 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bwv\" (UniqueName: \"kubernetes.io/projected/b54b872d-8289-4d13-85e5-af8db7a35e8f-kube-api-access-f4bwv\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.237108 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:41.737078275 +0000 UTC m=+140.113293636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.238249 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-config\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.238862 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/36a21a76-dd28-45fd-b4c6-4220c7c83410-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.239907 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b0a3d34b-f752-4335-99b7-60fa17cde89f-tmpfs\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.240653 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-config\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.243744 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-serving-cert\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: W1205 01:10:41.247028 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod526c34da_f910_47e3_bccc_eff5e2fb7b59.slice/crio-282d287a6d4611901bd4a84661392eacce75c13d9e1d1e3849c190db17a3e2a8 WatchSource:0}: Error finding container 282d287a6d4611901bd4a84661392eacce75c13d9e1d1e3849c190db17a3e2a8: Status 404 returned error can't find the container with id 282d287a6d4611901bd4a84661392eacce75c13d9e1d1e3849c190db17a3e2a8 Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.247051 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a28fbd4-a08e-4189-936c-f1a544953752-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.247652 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.247999 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0a3d34b-f752-4335-99b7-60fa17cde89f-apiservice-cert\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.249883 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.251355 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fde7ef59-700e-49a8-87f5-eac2580a1a54-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.251858 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-service-ca-bundle\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.253039 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.257675 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-trusted-ca\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.258981 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-certificates\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.269540 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fde7ef59-700e-49a8-87f5-eac2580a1a54-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.270322 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a28fbd4-a08e-4189-936c-f1a544953752-srv-cert\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.272555 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-tls\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.274864 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36a21a76-dd28-45fd-b4c6-4220c7c83410-serving-cert\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.276710 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0a3d34b-f752-4335-99b7-60fa17cde89f-webhook-cert\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.298352 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f0a906-45b8-4b8e-92b6-bfa0bb540fba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mxfcm\" (UID: \"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc 
kubenswrapper[4990]: I1205 01:10:41.303991 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5z6f\" (UniqueName: \"kubernetes.io/projected/36a21a76-dd28-45fd-b4c6-4220c7c83410-kube-api-access-d5z6f\") pod \"openshift-config-operator-7777fb866f-lxk9k\" (UID: \"36a21a76-dd28-45fd-b4c6-4220c7c83410\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.330768 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338223 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338300 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7jv2\" (UniqueName: \"kubernetes.io/projected/dea57213-02e7-4a09-adcd-a67306d41b54-kube-api-access-z7jv2\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338357 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-mountpoint-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338380 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a82c3b-7baa-448b-a9c6-4647454a2850-config\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338412 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9846c\" (UniqueName: \"kubernetes.io/projected/f901191e-752f-4cca-bf08-3274cf6a9254-kube-api-access-9846c\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338431 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-socket-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338448 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd6gp\" (UniqueName: \"kubernetes.io/projected/1a4b043f-328d-4f7f-9010-6df08452fdc7-kube-api-access-sd6gp\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc 
kubenswrapper[4990]: I1205 01:10:41.338465 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1-cert\") pod \"ingress-canary-mn2kc\" (UID: \"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1\") " pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338545 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b21d39-3456-4a12-a91b-459864e74087-config-volume\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.338563 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b54b872d-8289-4d13-85e5-af8db7a35e8f-signing-cabundle\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.338992 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:41.83897919 +0000 UTC m=+140.215194551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339097 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a82c3b-7baa-448b-a9c6-4647454a2850-config\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339131 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-mountpoint-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339721 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bwv\" (UniqueName: \"kubernetes.io/projected/b54b872d-8289-4d13-85e5-af8db7a35e8f-kube-api-access-f4bwv\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339762 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339765 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvv8x\" 
(UniqueName: \"kubernetes.io/projected/c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1-kube-api-access-mvv8x\") pod \"ingress-canary-mn2kc\" (UID: \"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1\") " pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339805 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339825 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-metrics-tls\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339879 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v58s\" (UniqueName: \"kubernetes.io/projected/e3b21d39-3456-4a12-a91b-459864e74087-kube-api-access-6v58s\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339895 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1a4b043f-328d-4f7f-9010-6df08452fdc7-certs\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339915 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcpl\" (UniqueName: \"kubernetes.io/projected/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-kube-api-access-fhcpl\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339956 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsnqj\" (UniqueName: \"kubernetes.io/projected/61a82c3b-7baa-448b-a9c6-4647454a2850-kube-api-access-dsnqj\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339975 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-config-volume\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.339992 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b54b872d-8289-4d13-85e5-af8db7a35e8f-signing-key\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340025 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b21d39-3456-4a12-a91b-459864e74087-secret-volume\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340042 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a82c3b-7baa-448b-a9c6-4647454a2850-serving-cert\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340058 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-plugins-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340074 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1a4b043f-328d-4f7f-9010-6df08452fdc7-node-bootstrap-token\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340105 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-csi-data-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340145 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-registration-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340178 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.340417 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b54b872d-8289-4d13-85e5-af8db7a35e8f-signing-cabundle\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.341050 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b21d39-3456-4a12-a91b-459864e74087-config-volume\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.341701 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-socket-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.346502 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-csi-data-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.347204 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-registration-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.350583 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.351529 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dea57213-02e7-4a09-adcd-a67306d41b54-plugins-dir\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.352092 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-config-volume\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.361717 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1-cert\") pod \"ingress-canary-mn2kc\" (UID: \"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1\") " pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.362538 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvpg\" (UniqueName: \"kubernetes.io/projected/6a28fbd4-a08e-4189-936c-f1a544953752-kube-api-access-drvpg\") pod \"olm-operator-6b444d44fb-zvbs8\" (UID: \"6a28fbd4-a08e-4189-936c-f1a544953752\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.364557 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmt5k\" (UniqueName: \"kubernetes.io/projected/b0a3d34b-f752-4335-99b7-60fa17cde89f-kube-api-access-bmt5k\") pod \"packageserver-d55dfcdfc-f5g9m\" (UID: \"b0a3d34b-f752-4335-99b7-60fa17cde89f\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.379686 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.381050 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b54b872d-8289-4d13-85e5-af8db7a35e8f-signing-key\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.386000 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a82c3b-7baa-448b-a9c6-4647454a2850-serving-cert\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.394443 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqvq5\" (UniqueName: \"kubernetes.io/projected/2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7-kube-api-access-fqvq5\") pod \"authentication-operator-69f744f599-sxmsc\" (UID: \"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.394772 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tswqd\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-kube-api-access-tswqd\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.395304 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1a4b043f-328d-4f7f-9010-6df08452fdc7-certs\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.399413 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1a4b043f-328d-4f7f-9010-6df08452fdc7-node-bootstrap-token\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.402345 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-metrics-tls\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.412361 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.413368 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b21d39-3456-4a12-a91b-459864e74087-secret-volume\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.416441 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-bound-sa-token\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.439933 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.441142 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.441974 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:41.941953127 +0000 UTC m=+140.318168488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.502294 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.502884 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.503427 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7jv2\" (UniqueName: \"kubernetes.io/projected/dea57213-02e7-4a09-adcd-a67306d41b54-kube-api-access-z7jv2\") pod \"csi-hostpathplugin-6b99n\" (UID: \"dea57213-02e7-4a09-adcd-a67306d41b54\") " pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.503426 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9846c\" (UniqueName: \"kubernetes.io/projected/f901191e-752f-4cca-bf08-3274cf6a9254-kube-api-access-9846c\") pod \"marketplace-operator-79b997595-nkp9t\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.507910 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.533342 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.538849 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd6gp\" (UniqueName: \"kubernetes.io/projected/1a4b043f-328d-4f7f-9010-6df08452fdc7-kube-api-access-sd6gp\") pod \"machine-config-server-7qlxt\" (UID: \"1a4b043f-328d-4f7f-9010-6df08452fdc7\") " pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.540299 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.542533 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvv8x\" (UniqueName: \"kubernetes.io/projected/c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1-kube-api-access-mvv8x\") pod \"ingress-canary-mn2kc\" (UID: \"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1\") " pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.545905 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.546322 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.046305215 +0000 UTC m=+140.422520586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.549274 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.565409 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsnqj\" (UniqueName: \"kubernetes.io/projected/61a82c3b-7baa-448b-a9c6-4647454a2850-kube-api-access-dsnqj\") pod \"service-ca-operator-777779d784-zp4n6\" (UID: \"61a82c3b-7baa-448b-a9c6-4647454a2850\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.566865 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v58s\" (UniqueName: \"kubernetes.io/projected/e3b21d39-3456-4a12-a91b-459864e74087-kube-api-access-6v58s\") pod \"collect-profiles-29414940-6mmxx\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.575990 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7qlxt" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.576590 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bwv\" (UniqueName: \"kubernetes.io/projected/b54b872d-8289-4d13-85e5-af8db7a35e8f-kube-api-access-f4bwv\") pod \"service-ca-9c57cc56f-gdsdh\" (UID: \"b54b872d-8289-4d13-85e5-af8db7a35e8f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.577473 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mn2kc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.598708 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcpl\" (UniqueName: \"kubernetes.io/projected/cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904-kube-api-access-fhcpl\") pod \"dns-default-5jm9j\" (UID: \"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904\") " pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.649719 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.650151 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.150131207 +0000 UTC m=+140.526346568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.668661 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.753761 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.754206 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.254192456 +0000 UTC m=+140.630407817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.808775 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.825191 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.828187 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2wzr5"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.829201 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.855527 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.855956 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.355932747 +0000 UTC m=+140.732148098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.857128 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.872255 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" event={"ID":"1291830b-16cb-4eab-9d73-6edd38a86882","Type":"ContainerStarted","Data":"1002daa77e9ca4c125aa2ea50f842f64deb371e897f15841b98b304b8e912766"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.873464 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" event={"ID":"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33","Type":"ContainerStarted","Data":"93dcb27f53f62c5da4d9c7eb3414bff0d15a6833996d8d69ce54b9fc36247193"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.894592 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" event={"ID":"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c","Type":"ContainerStarted","Data":"8d95ed27b69bd304d52be3b4ed5929e4433540bc244cc339ce7a8d859016bc48"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.914651 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" event={"ID":"e00f3a37-11c1-4862-b7db-c324afcf2214","Type":"ContainerStarted","Data":"42487e209d6c07eaa80272faf07f123b13b6ca2baf16e21373f0f1fc58bc6de1"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.956327 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" event={"ID":"1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d","Type":"ContainerStarted","Data":"e7deaad65b830be5c2c2c4b667314c0083f44c2dfeec0ce1c7bf9efb0a9e3862"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.958743 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:41 crc kubenswrapper[4990]: E1205 01:10:41.959109 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.459093279 +0000 UTC m=+140.835308640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.959670 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" event={"ID":"af7e46a8-0740-4f17-9d89-97eb924a5d39","Type":"ContainerStarted","Data":"31ad212f71fe7ea13624cb4b9e9bf5c24b9909107ebaa076008dccd631785317"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.963995 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-slxt5" event={"ID":"d272e8c9-2d62-4783-94bb-a6a997e08c46","Type":"ContainerStarted","Data":"5d4892dd55ec042eec23bae69a4a904affc4aa2ef878c7a4a29525203d665057"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.967193 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g6z24" event={"ID":"b8bb3b38-72ab-4295-8b62-99f5f424c711","Type":"ContainerStarted","Data":"30f43f002aa1a146c6ddcad51c6181741e5c645ab00d5fb1e2311abd6bc354e6"} Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.983332 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n"] Dec 05 01:10:41 crc kubenswrapper[4990]: I1205 01:10:41.995263 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" event={"ID":"77c0a017-4985-4ba7-bc61-35e7a17f3950","Type":"ContainerStarted","Data":"47b244faed59ebb933b801c7e47f3afc282a0ae9e9f42d6d2d63eda91f1fd26f"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.008538 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" event={"ID":"526c34da-f910-47e3-bccc-eff5e2fb7b59","Type":"ContainerStarted","Data":"282d287a6d4611901bd4a84661392eacce75c13d9e1d1e3849c190db17a3e2a8"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.028404 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" event={"ID":"be821014-926b-4b47-a347-e778ef1d085a","Type":"ContainerStarted","Data":"4dad6bd8e25bd0af0f44f6c94b553dd8421259a08db4357029f884752d178240"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.053692 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" event={"ID":"13a6b9e4-2ddb-4f53-889d-484647055582","Type":"ContainerStarted","Data":"7853842d98f52de2e3c74f472e97b39192801ee6a346957ed946b661fd094d76"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.053747 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" event={"ID":"13a6b9e4-2ddb-4f53-889d-484647055582","Type":"ContainerStarted","Data":"a456824bfcbffdb637e93835059de5fb83181a4bf478f541812461662f10e1a1"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.061122 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.062253 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.562237871 +0000 UTC m=+140.938453232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.071730 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" event={"ID":"1a0a0305-99f9-45d5-b298-383c5f6cc4f6","Type":"ContainerStarted","Data":"4ff26e5a5c167f373922c1fa5a3b6f4678b225947d149f0618969567976d758b"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.079045 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" event={"ID":"ad32df98-dc0d-4b57-b02b-b04aa0d9db65","Type":"ContainerStarted","Data":"400d832594b49e01b3eebd8dc630d11e269443869f4960e29fe65b1946c00312"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.079122 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" event={"ID":"ad32df98-dc0d-4b57-b02b-b04aa0d9db65","Type":"ContainerStarted","Data":"e962c12597a79367744632de4b75a851c1fcea03f3f76d444876e5d3c1a17c7c"} Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.079129 4990 patch_prober.go:28] interesting pod/downloads-7954f5f757-lwm9p container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.079190 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lwm9p" podUID="a567c246-6c6e-4f05-bde4-dc9d9dfc3699" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.163243 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.165327 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 01:10:42.665313341 +0000 UTC m=+141.041528702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.265063 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.265770 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.76565715 +0000 UTC m=+141.141872511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.368233 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.368822 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.868803932 +0000 UTC m=+141.245019293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.475615 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.476160 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:42.976135119 +0000 UTC m=+141.352350480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.578663 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.579255 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.079236859 +0000 UTC m=+141.455452230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.651158 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jx8km" podStartSLOduration=121.651131451 podStartE2EDuration="2m1.651131451s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:42.623080045 +0000 UTC m=+140.999295416" watchObservedRunningTime="2025-12-05 01:10:42.651131451 +0000 UTC m=+141.027346812" Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.683796 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.684436 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.184416572 +0000 UTC m=+141.560631933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.748056 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-lwm9p" podStartSLOduration=121.748023916 podStartE2EDuration="2m1.748023916s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:42.71692426 +0000 UTC m=+141.093139621" watchObservedRunningTime="2025-12-05 01:10:42.748023916 +0000 UTC m=+141.124239277" Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.785526 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.786022 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.286005358 +0000 UTC m=+141.662220719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.793181 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k"] Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.799528 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl"] Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.812013 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-glp4b"] Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.850208 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6"] Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.888839 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.890060 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.390036726 +0000 UTC m=+141.766252087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.893442 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x29jp" podStartSLOduration=121.893399856 podStartE2EDuration="2m1.893399856s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:42.87907852 +0000 UTC m=+141.255293881" watchObservedRunningTime="2025-12-05 01:10:42.893399856 +0000 UTC m=+141.269615217" Dec 05 01:10:42 crc kubenswrapper[4990]: I1205 01:10:42.991729 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:42 crc kubenswrapper[4990]: E1205 01:10:42.994357 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.494338633 +0000 UTC m=+141.870553994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.025694 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" podStartSLOduration=121.025677236 podStartE2EDuration="2m1.025677236s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:42.980107249 +0000 UTC m=+141.356322600" watchObservedRunningTime="2025-12-05 01:10:43.025677236 +0000 UTC m=+141.401892597" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.027054 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk" podStartSLOduration=121.027049157 podStartE2EDuration="2m1.027049157s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.024396328 +0000 UTC m=+141.400611689" watchObservedRunningTime="2025-12-05 01:10:43.027049157 +0000 UTC m=+141.403264518" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.093290 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.093896 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.593878457 +0000 UTC m=+141.970093808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.108066 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-slxt5" event={"ID":"d272e8c9-2d62-4783-94bb-a6a997e08c46","Type":"ContainerStarted","Data":"5e354dd6ba09e3b6393747fff9ea02988f0184765502557bb89342e934d25e41"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.112986 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" event={"ID":"9ea184ab-2c2a-4fbc-8598-01783702463f","Type":"ContainerStarted","Data":"59f6a98849709720000fd4e550390f7289fcc5569e121214d8deff78d3a4cf17"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.114001 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" event={"ID":"36a21a76-dd28-45fd-b4c6-4220c7c83410","Type":"ContainerStarted","Data":"3592f3bedf7050072e15741fa0a707667ebc04dd93b3dc5e6d985dc0c1309a00"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.115705 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" event={"ID":"526c34da-f910-47e3-bccc-eff5e2fb7b59","Type":"ContainerStarted","Data":"aff626d20a4845ca5750de804b18bf2518ada93a6ba32801810a3ebe51c2dde2"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.117597 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" event={"ID":"f8ee3821-a917-4dba-80f5-f7ad854541cf","Type":"ContainerStarted","Data":"88badb0b0a6133823573abb2028ec11121d63b67819f62414dc47cc7b8d587f2"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.152467 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nbpzn" podStartSLOduration=122.152437101 podStartE2EDuration="2m2.152437101s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.150418971 +0000 UTC m=+141.526634342" watchObservedRunningTime="2025-12-05 01:10:43.152437101 +0000 UTC m=+141.528652462" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.167358 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" event={"ID":"1291830b-16cb-4eab-9d73-6edd38a86882","Type":"ContainerStarted","Data":"5b5d5dedf60310865d0024f3c58025ecf7ae1bca9cd48802bdd0ca14cd3ebbce"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.175682 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" event={"ID":"96ebb813-fa46-40e3-b728-147dd064f9d4","Type":"ContainerStarted","Data":"cf215c5bbdc8d922f9044aed474901e47536ae3694adcb572102d45dcee60841"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.195442 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.197615 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.697596136 +0000 UTC m=+142.073811497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.230906 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" event={"ID":"e00f3a37-11c1-4862-b7db-c324afcf2214","Type":"ContainerStarted","Data":"54c99f000ea5f5cd580c527b1061f6034d23a39f83b28db5f053772b0bae7851"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.247407 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g6z24" event={"ID":"b8bb3b38-72ab-4295-8b62-99f5f424c711","Type":"ContainerStarted","Data":"8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.278280 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" event={"ID":"73009b29-5f92-4552-969c-669c459575ae","Type":"ContainerStarted","Data":"fa9f3b2922fb8d06ccb1ecfeca5d99ccee37bdf0e0021b01b7ed7704b42db4d2"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.282009 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" event={"ID":"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5","Type":"ContainerStarted","Data":"651f4fed04e286d93a36749e9884fe32912aeea6c3799422695bbe4fe0362112"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.297112 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.297427 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-t5772" podStartSLOduration=122.297406229 podStartE2EDuration="2m2.297406229s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.296693548 +0000 UTC m=+141.672908909" watchObservedRunningTime="2025-12-05 01:10:43.297406229 +0000 UTC m=+141.673621590" Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 
01:10:43.298598 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.798580654 +0000 UTC m=+142.174796015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.306982 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" event={"ID":"1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d","Type":"ContainerStarted","Data":"86bdef1bcd0a4d578bc37133584e1c1681c3b20eb188168c0f42191ab0ac10cd"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.308096 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" event={"ID":"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33","Type":"ContainerStarted","Data":"d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.308910 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.333786 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" event={"ID":"af7e46a8-0740-4f17-9d89-97eb924a5d39","Type":"ContainerStarted","Data":"2b4f82aca50eff3f73c8a16dd6584d86c45e79defdbe0ace0392212a746432b1"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.333983 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.341449 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7qlxt" event={"ID":"1a4b043f-328d-4f7f-9010-6df08452fdc7","Type":"ContainerStarted","Data":"b98481b657b4fd0b400bcb0ddfbde2dd71b2103765fee4d5dd87260b079073e3"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.341542 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7qlxt" event={"ID":"1a4b043f-328d-4f7f-9010-6df08452fdc7","Type":"ContainerStarted","Data":"69894f741dd65043f67d2d21990a72be76c2ea796ca7e8b4a22acec61358f433"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.347938 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" event={"ID":"1a0a0305-99f9-45d5-b298-383c5f6cc4f6","Type":"ContainerStarted","Data":"e6034da07dd178b7ec9afcaf5383250dc5333f231f059e0f7397bd1eae6dd496"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.367039 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" 
event={"ID":"be821014-926b-4b47-a347-e778ef1d085a","Type":"ContainerStarted","Data":"9327b1f5202671be6430997bebea86acb3119818d8f8a5f7522298e63d729ca4"} Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.383818 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dhllr" podStartSLOduration=122.383792912 podStartE2EDuration="2m2.383792912s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.383214395 +0000 UTC m=+141.759429766" watchObservedRunningTime="2025-12-05 01:10:43.383792912 +0000 UTC m=+141.760008273" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.399555 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.401558 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:43.90153455 +0000 UTC m=+142.277750101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.511629 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.512774 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.012754193 +0000 UTC m=+142.388969554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.522074 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5hlb" podStartSLOduration=122.52204414 podStartE2EDuration="2m2.52204414s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.491123479 +0000 UTC m=+141.867338840" watchObservedRunningTime="2025-12-05 01:10:43.52204414 +0000 UTC m=+141.898259501" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.544670 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rnwbq" podStartSLOduration=122.544639373 podStartE2EDuration="2m2.544639373s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.538057377 +0000 UTC m=+141.914272758" watchObservedRunningTime="2025-12-05 01:10:43.544639373 +0000 UTC m=+141.920854734" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.587220 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" podStartSLOduration=122.58719588 podStartE2EDuration="2m2.58719588s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.584938033 +0000 UTC m=+141.961153394" watchObservedRunningTime="2025-12-05 01:10:43.58719588 +0000 UTC m=+141.963411241" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.617053 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.619708 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.119692518 +0000 UTC m=+142.495907939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.623967 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-7qlxt" podStartSLOduration=5.623927044 podStartE2EDuration="5.623927044s" podCreationTimestamp="2025-12-05 01:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.615128792 +0000 UTC m=+141.991344153" watchObservedRunningTime="2025-12-05 01:10:43.623927044 +0000 UTC m=+142.000142415" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.642093 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-slxt5" podStartSLOduration=122.642063224 podStartE2EDuration="2m2.642063224s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.639686653 +0000 UTC m=+142.015902014" watchObservedRunningTime="2025-12-05 01:10:43.642063224 +0000 UTC m=+142.018278595" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.682654 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7rszv" podStartSLOduration=122.682625692 podStartE2EDuration="2m2.682625692s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.66308563 +0000 UTC m=+142.039301001" watchObservedRunningTime="2025-12-05 01:10:43.682625692 +0000 UTC m=+142.058841053" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.684510 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w"] Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.703607 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g6z24" podStartSLOduration=122.703585237 podStartE2EDuration="2m2.703585237s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.697150865 +0000 UTC m=+142.073366226" watchObservedRunningTime="2025-12-05 01:10:43.703585237 +0000 UTC m=+142.079800598" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.721074 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.721457 4990 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.221435838 +0000 UTC m=+142.597651199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.749534 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" podStartSLOduration=122.749503014 podStartE2EDuration="2m2.749503014s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.734851498 +0000 UTC m=+142.111066929" watchObservedRunningTime="2025-12-05 01:10:43.749503014 +0000 UTC m=+142.125718375" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.778302 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" podStartSLOduration=122.778284461 podStartE2EDuration="2m2.778284461s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.775180569 +0000 UTC m=+142.151395930" watchObservedRunningTime="2025-12-05 01:10:43.778284461 +0000 UTC m=+142.154499822" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.791620 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr"] Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.822373 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.822866 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.322847459 +0000 UTC m=+142.699062820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.836263 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6b99n"] Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.841585 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lj2c6" podStartSLOduration=122.841545185 podStartE2EDuration="2m2.841545185s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:43.832675131 +0000 UTC m=+142.208890492" watchObservedRunningTime="2025-12-05 01:10:43.841545185 +0000 UTC m=+142.217760546" Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.847288 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w"] Dec 05 01:10:43 crc kubenswrapper[4990]: W1205 01:10:43.907227 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddea57213_02e7_4a09_adcd_a67306d41b54.slice/crio-b8dc9ee388d521d202212351ae0f1b06d551553a6d0aab2169983c5c4918c360 WatchSource:0}: Error finding container b8dc9ee388d521d202212351ae0f1b06d551553a6d0aab2169983c5c4918c360: Status 404 returned error can't find the container with id b8dc9ee388d521d202212351ae0f1b06d551553a6d0aab2169983c5c4918c360 Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.926616 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.926716 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.426697862 +0000 UTC m=+142.802913223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.926969 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:43 crc kubenswrapper[4990]: E1205 01:10:43.927303 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.427293329 +0000 UTC m=+142.803508690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:43 crc kubenswrapper[4990]: I1205 01:10:43.976732 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.028236 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.028720 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.52870018 +0000 UTC m=+142.904915541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.029151 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sxmsc"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.038367 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.040369 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gdsdh"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.055657 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.064988 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 01:10:44 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld Dec 05 01:10:44 crc kubenswrapper[4990]: [+]process-running ok Dec 05 01:10:44 crc kubenswrapper[4990]: healthz check failed Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.065033 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.069606 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5jm9j"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.072832 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.092132 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mn2kc"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.100650 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nkp9t"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.114509 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.118446 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6"] Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.133554 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.134419 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.634395657 +0000 UTC m=+143.010611018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: W1205 01:10:44.229829 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf901191e_752f_4cca_bf08_3274cf6a9254.slice/crio-6906712a05dce9e2ad9a3d6f22a0b3962f422afab1805df834b9cb0438cecf22 WatchSource:0}: Error finding container 6906712a05dce9e2ad9a3d6f22a0b3962f422afab1805df834b9cb0438cecf22: Status 404 returned error can't find the container with id 6906712a05dce9e2ad9a3d6f22a0b3962f422afab1805df834b9cb0438cecf22 Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.238773 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.239099 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.739047354 +0000 UTC m=+143.115262715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.239232 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.239730 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.739721734 +0000 UTC m=+143.115937095 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.341234 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.341518 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.841450074 +0000 UTC m=+143.217665435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.341599 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.342862 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.842823244 +0000 UTC m=+143.219038605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.409162 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" event={"ID":"b54b872d-8289-4d13-85e5-af8db7a35e8f","Type":"ContainerStarted","Data":"4c71982697dd2fbb51f45ff7b71e78e0fbfc19dce734c42e02dc3b26cddba95e"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.417893 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" event={"ID":"6a28fbd4-a08e-4189-936c-f1a544953752","Type":"ContainerStarted","Data":"b220cd5d58e0853b2542c7b425afbcf1c7061081e0e934629146512055703ab4"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.424906 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" event={"ID":"f901191e-752f-4cca-bf08-3274cf6a9254","Type":"ContainerStarted","Data":"6906712a05dce9e2ad9a3d6f22a0b3962f422afab1805df834b9cb0438cecf22"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.431334 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" event={"ID":"af7e46a8-0740-4f17-9d89-97eb924a5d39","Type":"ContainerStarted","Data":"5ce78b877e82750e5a54880b357a7fa72fa188ace227bc79299b6d32c2ccead9"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.446668 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.447767 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:44.94774915 +0000 UTC m=+143.323964511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.448442 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" event={"ID":"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7","Type":"ContainerStarted","Data":"d55bbe3ae568298088fcee0f96b690b93d73e109fa75804f487609372bdf4f97"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.448516 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" event={"ID":"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7","Type":"ContainerStarted","Data":"c6d4ce25f82aaf97ec189faedc70c3e2d0d7417a7e766ad0f8fd24f21c9fe777"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.458757 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vq4v6" podStartSLOduration=123.458732257 podStartE2EDuration="2m3.458732257s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.457726207 +0000 UTC m=+142.833941568" watchObservedRunningTime="2025-12-05 01:10:44.458732257 +0000 UTC m=+142.834947618" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.486552 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fqh9n" event={"ID":"96ebb813-fa46-40e3-b728-147dd064f9d4","Type":"ContainerStarted","Data":"3841c66ddc5abfa881b165b2a2c6bc0cc65becf2e7c22da0f2140b4f9c42dc84"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.539721 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" event={"ID":"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba","Type":"ContainerStarted","Data":"3b6a7acd30cb0f3b4b490199ffd49fcb56554049804606d53dfcf3ae412dd768"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.548592 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.549916 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.049902362 +0000 UTC m=+143.426117723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.556294 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" event={"ID":"97c12d0b-8415-47ff-a3cf-c8e906620182","Type":"ContainerStarted","Data":"ce6af538b1dd84089b378d2a018e77044782ce0fcbd85ca6cd22f4b057c63687"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.556335 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" event={"ID":"97c12d0b-8415-47ff-a3cf-c8e906620182","Type":"ContainerStarted","Data":"c21b82e85c22bb88dcad6ac6c90fedfe4ce18d8795a696dfb853eb4e0908b7d7"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.563854 4990 generic.go:334] "Generic (PLEG): container finished" podID="36a21a76-dd28-45fd-b4c6-4220c7c83410" containerID="0d4f3ba87bc3931b4013a7797c17d6f54e275f2db57f52fff15868aace04270f" exitCode=0 Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.563932 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" event={"ID":"36a21a76-dd28-45fd-b4c6-4220c7c83410","Type":"ContainerDied","Data":"0d4f3ba87bc3931b4013a7797c17d6f54e275f2db57f52fff15868aace04270f"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.602975 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" event={"ID":"73009b29-5f92-4552-969c-669c459575ae","Type":"ContainerStarted","Data":"192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.604270 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.609445 4990 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2wzr5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.609503 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" podUID="73009b29-5f92-4552-969c-669c459575ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.621003 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mn2kc" event={"ID":"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1","Type":"ContainerStarted","Data":"2fea97801e0e8430c0eae9331059a96b32d03f447e71e45c7bafcb2ad6e8a98d"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.626212 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" podStartSLOduration=123.626198534 podStartE2EDuration="2m3.626198534s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.585986887 +0000 UTC m=+142.962202248" watchObservedRunningTime="2025-12-05 01:10:44.626198534 +0000 UTC m=+143.002413895" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.635879 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.636463 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.644571 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" event={"ID":"dea57213-02e7-4a09-adcd-a67306d41b54","Type":"ContainerStarted","Data":"b8dc9ee388d521d202212351ae0f1b06d551553a6d0aab2169983c5c4918c360"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.650366 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" event={"ID":"e3b21d39-3456-4a12-a91b-459864e74087","Type":"ContainerStarted","Data":"107ba6bc990e2929479a00d3c014236f643017c0fd3b6b7658b0f67089320d97"} Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.656227 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.656806 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.156788585 +0000 UTC m=+143.533003946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.656844 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.659020 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 01:10:45.158998591 +0000 UTC m=+143.535213982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.672737 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" podStartSLOduration=123.672707219 podStartE2EDuration="2m3.672707219s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.659072093 +0000 UTC m=+143.035287474" watchObservedRunningTime="2025-12-05 01:10:44.672707219 +0000 UTC m=+143.048922580"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.676889 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" event={"ID":"1be72bab-0eb7-4dc5-bc23-9f2eb261d76c","Type":"ContainerStarted","Data":"674fe2e2277844cc92282d267cbe85df5ee33d340048ac352cbbd7ef316f4429"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.688300 4990 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gxktl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]log ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]etcd ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/max-in-flight-filter ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 05 01:10:44 crc kubenswrapper[4990]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/openshift.io-startinformers ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 05 01:10:44 crc kubenswrapper[4990]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 05 01:10:44 crc kubenswrapper[4990]: livez check failed
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.688380 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gxktl" podUID="1291830b-16cb-4eab-9d73-6edd38a86882" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.758312 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.760148 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.260125903 +0000 UTC m=+143.636341264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.781051 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" event={"ID":"e00f3a37-11c1-4862-b7db-c324afcf2214","Type":"ContainerStarted","Data":"e35ca471ccf9fd5c774d3a4b668c02b051c5dd0197dbc408eb4731b201a7a85c"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.799179 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" event={"ID":"f8ee3821-a917-4dba-80f5-f7ad854541cf","Type":"ContainerStarted","Data":"fe5b47a76ac8899bfdf00693d1638f3a96607b9397ff3c574766bb5692965b30"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.799894 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.817787 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-42lhm" podStartSLOduration=123.81776454 podStartE2EDuration="2m3.81776454s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.817611475 +0000 UTC m=+143.193826846" watchObservedRunningTime="2025-12-05 01:10:44.81776454 +0000 UTC m=+143.193979901"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.817900 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ztdtk" podStartSLOduration=123.817895814 podStartE2EDuration="2m3.817895814s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.708050462 +0000 UTC m=+143.084265823" watchObservedRunningTime="2025-12-05 01:10:44.817895814 +0000 UTC m=+143.194111165"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.848421 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.850807 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" event={"ID":"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5","Type":"ContainerStarted","Data":"5f78c7797a5a6b57a7a3b3ece73279b8d41c22459f6a9acd34e77fbaa3e0550a"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.859668 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ggfl6" podStartSLOduration=122.859647137 podStartE2EDuration="2m2.859647137s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.859134292 +0000 UTC m=+143.235349653" watchObservedRunningTime="2025-12-05 01:10:44.859647137 +0000 UTC m=+143.235862498"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.860831 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" event={"ID":"1f42168c-ddac-4d6d-a6c3-d8b3d2beeb6d","Type":"ContainerStarted","Data":"2ea887e4c664f3bd8b1bcdd9572ea1f05d0e0ef73d2161ea68d107142a4679c7"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.862130 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.865384 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.365363527 +0000 UTC m=+143.741578888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.880541 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5jm9j" event={"ID":"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904","Type":"ContainerStarted","Data":"4736be82bfad36f617564fb6503ad508d9a1a313a7e6e4c6d058434d272357fd"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.895636 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.896104 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.907245 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.912142 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" event={"ID":"9ea184ab-2c2a-4fbc-8598-01783702463f","Type":"ContainerStarted","Data":"c9f621dd9e8aea0e6c813e99ea1b534575ad934dfbf0fea03875c1a217d372fd"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.912204 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" event={"ID":"9ea184ab-2c2a-4fbc-8598-01783702463f","Type":"ContainerStarted","Data":"aef96a5715c0f0d454f78689b8970da876e59fd30390ebfc9f199dbb45f9d4f6"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.912893 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.940440 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" event={"ID":"385ea779-f9e4-49c3-acc7-309d7ef1d174","Type":"ContainerStarted","Data":"78f312a117af3a08340a7abe228fb9adae9bd944b537585b9ca370e92ab33f65"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.947076 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" event={"ID":"61a82c3b-7baa-448b-a9c6-4647454a2850","Type":"ContainerStarted","Data":"f7178613e862fb16fffdc469b430a7e3cc95d621824b8dd4409e7f7c1793cda9"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.962656 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" podStartSLOduration=123.962627224 podStartE2EDuration="2m3.962627224s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.94772189 +0000 UTC m=+143.323937251" watchObservedRunningTime="2025-12-05 01:10:44.962627224 +0000 UTC m=+143.338842585"
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.969196 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:44 crc kubenswrapper[4990]: E1205 01:10:44.971214 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.47119178 +0000 UTC m=+143.847407141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.975589 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" event={"ID":"b0a3d34b-f752-4335-99b7-60fa17cde89f","Type":"ContainerStarted","Data":"0a3e39d9acd5e03c4ae0e0169dad8b4227e78c5d627ce844338622d86c50e9f2"}
Dec 05 01:10:44 crc kubenswrapper[4990]: I1205 01:10:44.981461 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" podStartSLOduration=122.981425614 podStartE2EDuration="2m2.981425614s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:44.977051774 +0000 UTC m=+143.353267135" watchObservedRunningTime="2025-12-05 01:10:44.981425614 +0000 UTC m=+143.357640975"
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.038965 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" event={"ID":"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7","Type":"ContainerStarted","Data":"ba2a181426202fd74ed16dd3604a788680c3c4b35ed83f4ba5ccb6eaa0c0ecd4"}
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.054100 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxd47" podStartSLOduration=124.054031127 podStartE2EDuration="2m4.054031127s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:45.038797053 +0000 UTC m=+143.415012414" watchObservedRunningTime="2025-12-05 01:10:45.054031127 +0000 UTC m=+143.430246488"
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.055454 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qqplk"
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.070955 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 01:10:45 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld
Dec 05 01:10:45 crc kubenswrapper[4990]: [+]process-running ok
Dec 05 01:10:45 crc kubenswrapper[4990]: healthz check failed
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.071028 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.073474 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.075084 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.575056743 +0000 UTC m=+143.951272294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.175543 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.176021 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.675974259 +0000 UTC m=+144.052189620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.177052 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.188543 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.688423649 +0000 UTC m=+144.064643260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.277586 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" podStartSLOduration=124.277565554 podStartE2EDuration="2m4.277565554s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:45.212589529 +0000 UTC m=+143.588804890" watchObservedRunningTime="2025-12-05 01:10:45.277565554 +0000 UTC m=+143.653780905"
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.279825 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.280253 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.780234414 +0000 UTC m=+144.156449775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.310475 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" podStartSLOduration=124.310452914 podStartE2EDuration="2m4.310452914s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:45.302814286 +0000 UTC m=+143.679029637" watchObservedRunningTime="2025-12-05 01:10:45.310452914 +0000 UTC m=+143.686668275"
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.383664 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.384066 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.884049346 +0000 UTC m=+144.260264707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.485323 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.485677 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:45.985655912 +0000 UTC m=+144.361871273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.587333 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.588183 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.088165505 +0000 UTC m=+144.464380866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.689412 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.689851 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.189830783 +0000 UTC m=+144.566046144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.790844 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.791313 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.291288495 +0000 UTC m=+144.667503846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.891916 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.892163 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.392121298 +0000 UTC m=+144.768336659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.892636 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.893120 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.393097537 +0000 UTC m=+144.769312898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:45 crc kubenswrapper[4990]: I1205 01:10:45.994383 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:45 crc kubenswrapper[4990]: E1205 01:10:45.994790 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.494756525 +0000 UTC m=+144.870971886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.045834 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" event={"ID":"f8f0a906-45b8-4b8e-92b6-bfa0bb540fba","Type":"ContainerStarted","Data":"a606031cb4ca0d696c26eb1c46a6b2c348766b25b3eb936a6735be18b7c6e808"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.048549 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mn2kc" event={"ID":"c8ea14d4-f00e-4b39-a6f5-1fbf347ceeb1","Type":"ContainerStarted","Data":"9cdb041c405684bf9498f00f7910f6b7c2c1e87a6f74e99c432ac04ac07e6356"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.050705 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" event={"ID":"60dfec50-dbbe-4c06-adfe-bd82cbcc8ef7","Type":"ContainerStarted","Data":"fab439be399328e59e33ef3afc2d8bd4e7d5cc8f25090b6fb2512025fe2faebb"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.052396 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" event={"ID":"b0a3d34b-f752-4335-99b7-60fa17cde89f","Type":"ContainerStarted","Data":"5ad2e5555e4799113459b8036e61ce25c25a7dafb6452d562ebf0ea344632eab"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.053040 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.055110 4990 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f5g9m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.055169 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" podUID="b0a3d34b-f752-4335-99b7-60fa17cde89f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.056886 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vm68w" event={"ID":"97c12d0b-8415-47ff-a3cf-c8e906620182","Type":"ContainerStarted","Data":"fc10ad8ddc049aa43448c3dd0608e2da36b4e36f2a21ab8c45a3dc1b6dcc5df2"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.057807 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 01:10:46 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld
Dec 05 01:10:46 crc kubenswrapper[4990]: [+]process-running ok
Dec 05 01:10:46 crc kubenswrapper[4990]: healthz check failed
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.057889 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.061246 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" event={"ID":"b54b872d-8289-4d13-85e5-af8db7a35e8f","Type":"ContainerStarted","Data":"f58edab073ecbed799bb9db796f4531857dc1407765e6c23b811db3dcf07b8c4"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.063385 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" event={"ID":"e3b21d39-3456-4a12-a91b-459864e74087","Type":"ContainerStarted","Data":"4afec6235374e956b244388a9a2e025fb6f35b3c77708fd1af1a0ddadb2003a4"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.065303 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sxmsc" event={"ID":"2d094a74-23ff-48bd-a4eb-b5f6e7a1fbe7","Type":"ContainerStarted","Data":"71fa001b9653f89a736aebbf53f44e3284fa1d83edb2512ef7463b802b16680c"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.067509 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-glp4b" event={"ID":"9b3d56bb-86a8-44d1-a8b6-4d669458e1e5","Type":"ContainerStarted","Data":"186010dac4cd38d5e0cb4cac60bee2df840e03776ac64c00b28284e1884633c1"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.069367 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5jm9j" event={"ID":"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904","Type":"ContainerStarted","Data":"42e2391b5d9bef2f442a566a079a2320c4d4aece3ce33342719eabc1cef94476"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.071840 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vgq6w" event={"ID":"385ea779-f9e4-49c3-acc7-309d7ef1d174","Type":"ContainerStarted","Data":"7701d2ac00870ad8e43eec4ef94d9ecd02fae22b8345c928e68ac4b09317eddc"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.073858 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" event={"ID":"61a82c3b-7baa-448b-a9c6-4647454a2850","Type":"ContainerStarted","Data":"f0f5b4b675dbf5b974d7ab8ec8932dcf025e1967658799ed50f24d7beb7bd4e2"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.075388 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" event={"ID":"f901191e-752f-4cca-bf08-3274cf6a9254","Type":"ContainerStarted","Data":"a17f708f9ab3738b1d4a6f860a5814b4938a31c15609e34442334d78ade5b5dd"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.076287 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.077910 4990 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nkp9t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.077954 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.078783 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" event={"ID":"36a21a76-dd28-45fd-b4c6-4220c7c83410","Type":"ContainerStarted","Data":"da5894c5252228f36c5019acf44a7cb52e66df989c7ab61ed3c7e286cf88be3c"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.078996 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.081611 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxfcm" podStartSLOduration=125.081592221 podStartE2EDuration="2m5.081592221s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.078454858 +0000 UTC m=+144.454670219" watchObservedRunningTime="2025-12-05 01:10:46.081592221 +0000 UTC m=+144.457807582"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.082844 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" event={"ID":"6a28fbd4-a08e-4189-936c-f1a544953752","Type":"ContainerStarted","Data":"937345c1b969e87d4f7e84e7efb28114910c1bdd4c0d1d9fbcc6058804c92114"}
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.083339 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.087412 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.096332 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.096858 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.596843875 +0000 UTC m=+144.973059236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.116173 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.119796 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zp4n6" podStartSLOduration=124.119767988 podStartE2EDuration="2m4.119767988s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.119563522 +0000 UTC m=+144.495778883" watchObservedRunningTime="2025-12-05 01:10:46.119767988 +0000 UTC m=+144.495983349"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.145020 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdvxr" podStartSLOduration=125.14499806 podStartE2EDuration="2m5.14499806s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.142289879 +0000 UTC m=+144.518505240" watchObservedRunningTime="2025-12-05 01:10:46.14499806 +0000 UTC m=+144.521213421"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.160009 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" podStartSLOduration=124.159985336 podStartE2EDuration="2m4.159985336s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.159963775 +0000 UTC m=+144.536179136" watchObservedRunningTime="2025-12-05 01:10:46.159985336 +0000 UTC m=+144.536200697"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.184407 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" podStartSLOduration=125.184384493 podStartE2EDuration="2m5.184384493s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.181441785 +0000 UTC m=+144.557657146" watchObservedRunningTime="2025-12-05 01:10:46.184384493 +0000 UTC m=+144.560599854"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.200328 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gdsdh" podStartSLOduration=124.200307127 podStartE2EDuration="2m4.200307127s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.198507563 +0000 UTC m=+144.574722924" watchObservedRunningTime="2025-12-05 01:10:46.200307127 +0000 UTC m=+144.576522488"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.203010 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.203237 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.703194413 +0000 UTC m=+145.079409774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.205431 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.205762 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.705748269 +0000 UTC m=+145.081963630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.227506 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mn2kc" podStartSLOduration=8.227472366 podStartE2EDuration="8.227472366s" podCreationTimestamp="2025-12-05 01:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.221973382 +0000 UTC m=+144.598188743" watchObservedRunningTime="2025-12-05 01:10:46.227472366 +0000 UTC m=+144.603687717"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.244787 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrngg"]
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.267080 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrngg"]
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.267236 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.276131 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.276765 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" podStartSLOduration=124.276747974 podStartE2EDuration="2m4.276747974s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.274125495 +0000 UTC m=+144.650340856" watchObservedRunningTime="2025-12-05 01:10:46.276747974 +0000 UTC m=+144.652963335"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.318055 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.318191 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-catalog-content\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.318217 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjzd\" (UniqueName: \"kubernetes.io/projected/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-kube-api-access-nxjzd\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.318287 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-utilities\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.318398 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.818378093 +0000 UTC m=+145.194593454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.416527 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zvbs8" podStartSLOduration=124.416500296 podStartE2EDuration="2m4.416500296s" podCreationTimestamp="2025-12-05 01:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.394177491 +0000 UTC m=+144.770392852" watchObservedRunningTime="2025-12-05 01:10:46.416500296 +0000 UTC m=+144.792715657"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.418048 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" podStartSLOduration=125.418044352 podStartE2EDuration="2m5.418044352s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:46.417368242 +0000 UTC m=+144.793583603" watchObservedRunningTime="2025-12-05 01:10:46.418044352 +0000 UTC m=+144.794259713"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.419066 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-utilities\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.419983 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-utilities\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.420067 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-catalog-content\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.420088 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxjzd\" (UniqueName: \"kubernetes.io/projected/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-kube-api-access-nxjzd\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.420303 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-catalog-content\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.420361 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.420965 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:46.920953618 +0000 UTC m=+145.297168979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.424442 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7vm5c"]
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.425962 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.436706 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.460087 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vm5c"]
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.467423 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjzd\" (UniqueName: \"kubernetes.io/projected/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-kube-api-access-nxjzd\") pod \"certified-operators-rrngg\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.522255 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.522394 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-utilities\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.522473 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhd2n\" (UniqueName: \"kubernetes.io/projected/4100dc4e-10a0-4d5c-b441-c87e80787d93-kube-api-access-xhd2n\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.522511 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-catalog-content\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.522642 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.022625537 +0000 UTC m=+145.398840898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.605502 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.623888 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-catalog-content\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.623936 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.623991 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-utilities\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.624069 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhd2n\" (UniqueName: \"kubernetes.io/projected/4100dc4e-10a0-4d5c-b441-c87e80787d93-kube-api-access-xhd2n\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.624450 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.124427059 +0000 UTC m=+145.500642420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.624564 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-catalog-content\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.624614 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-utilities\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.649636 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6l6n4"]
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.650643 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.663033 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6l6n4"]
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.694339 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhd2n\" (UniqueName: \"kubernetes.io/projected/4100dc4e-10a0-4d5c-b441-c87e80787d93-kube-api-access-xhd2n\") pod \"community-operators-7vm5c\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.726239 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.726583 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-catalog-content\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.726632 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-utilities\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.726688 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjv5p\" (UniqueName: \"kubernetes.io/projected/5db06188-8c87-4ce2-a928-97b7ebf55976-kube-api-access-gjv5p\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.726854 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.226830069 +0000 UTC m=+145.603045430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.747566 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.829061 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-catalog-content\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.829131 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-utilities\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.829173 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.829220 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjv5p\" (UniqueName: \"kubernetes.io/projected/5db06188-8c87-4ce2-a928-97b7ebf55976-kube-api-access-gjv5p\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4"
Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.829976 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.32996132 +0000 UTC m=+145.706176681 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.830001 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-utilities\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.830145 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-catalog-content\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.851226 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2fx8c"] Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.852209 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.885686 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjv5p\" (UniqueName: \"kubernetes.io/projected/5db06188-8c87-4ce2-a928-97b7ebf55976-kube-api-access-gjv5p\") pod \"certified-operators-6l6n4\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.901748 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2fx8c"] Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.930285 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.930511 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-catalog-content\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.930537 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5j7r\" (UniqueName: \"kubernetes.io/projected/d30b3185-30ec-4a3f-a149-1073cd20ee46-kube-api-access-z5j7r\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.930573 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-utilities\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:46 crc kubenswrapper[4990]: E1205 01:10:46.930867 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.430837725 +0000 UTC m=+145.807053086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:46 crc kubenswrapper[4990]: I1205 01:10:46.983924 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.032246 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.032858 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-catalog-content\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.032884 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5j7r\" (UniqueName: \"kubernetes.io/projected/d30b3185-30ec-4a3f-a149-1073cd20ee46-kube-api-access-z5j7r\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.032919 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-utilities\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.033928 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.533906154 +0000 UTC m=+145.910121515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.034099 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-utilities\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.034462 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-catalog-content\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.061695 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 01:10:47 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld Dec 05 01:10:47 crc kubenswrapper[4990]: [+]process-running ok Dec 05 01:10:47 crc kubenswrapper[4990]: healthz check failed Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.061769 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.095665 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5j7r\" (UniqueName: \"kubernetes.io/projected/d30b3185-30ec-4a3f-a149-1073cd20ee46-kube-api-access-z5j7r\") pod \"community-operators-2fx8c\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.136244 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.136785 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.636764028 +0000 UTC m=+146.012979389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.152817 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5jm9j" event={"ID":"cbdeca6b-cc7e-4f93-80cf-c5cfc1b18904","Type":"ContainerStarted","Data":"5ea10db7173313119b9a0c023e124d2a00a16a93bbc22bd6f482d4ab63e40a4f"} Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.153962 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.171763 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.188883 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" event={"ID":"dea57213-02e7-4a09-adcd-a67306d41b54","Type":"ContainerStarted","Data":"fb46f75f1c28423f2ea3ab59984cb4c9603f69d5700e346c2eba2d473e2a9964"} Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.190445 4990 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nkp9t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.190536 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.244549 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.245081 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.745066374 +0000 UTC m=+146.121281735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.348089 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.366631 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.866603043 +0000 UTC m=+146.242818404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.495197 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.496072 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:47.996057759 +0000 UTC m=+146.372273110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.549121 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5jm9j" podStartSLOduration=9.549095599 podStartE2EDuration="9.549095599s" podCreationTimestamp="2025-12-05 01:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:47.266583804 +0000 UTC m=+145.642799155" watchObservedRunningTime="2025-12-05 01:10:47.549095599 +0000 UTC m=+145.925310960" Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.549354 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrngg"] Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.606679 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.607176 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.107154498 +0000 UTC m=+146.483369849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.711391 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.711876 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.211861795 +0000 UTC m=+146.588077156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.742075 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vm5c"] Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.812282 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.812670 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.312637387 +0000 UTC m=+146.688852748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.817375 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.818016 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.317998167 +0000 UTC m=+146.694213518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.892618 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6l6n4"] Dec 05 01:10:47 crc kubenswrapper[4990]: I1205 01:10:47.920662 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:47 crc kubenswrapper[4990]: E1205 01:10:47.921065 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.421043916 +0000 UTC m=+146.797259277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.023581 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.024433 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.524417855 +0000 UTC m=+146.900633216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.031059 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2fx8c"] Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.061381 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 01:10:48 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld Dec 05 01:10:48 crc kubenswrapper[4990]: [+]process-running ok Dec 05 01:10:48 crc kubenswrapper[4990]: healthz check failed Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.061457 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.125376 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.125752 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.625725372 +0000 UTC m=+147.001940733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.126234 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.126618 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.626610018 +0000 UTC m=+147.002825379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.193040 4990 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f5g9m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.193110 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" podUID="b0a3d34b-f752-4335-99b7-60fa17cde89f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.228256 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mg56k"] Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.229287 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.229322 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.229514 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.729455041 +0000 UTC m=+147.105670402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.229694 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.229998 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.729984067 +0000 UTC m=+147.106199428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.231170 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerStarted","Data":"fd97dc0481a2a66515f3ab547a2e5f0233700347f90789c8d9902a4ccbcfddbb"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.236101 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.254775 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg56k"] Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.255281 4990 generic.go:334] "Generic (PLEG): container finished" podID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerID="b6d7fb0fc8ac0eed02944a68148d7de2dc9c07a6d130fe8f7d912c29d0562a4c" exitCode=0 Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.255360 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrngg" event={"ID":"0cbdb0ba-36d8-4cb7-878a-88afedb7983c","Type":"ContainerDied","Data":"b6d7fb0fc8ac0eed02944a68148d7de2dc9c07a6d130fe8f7d912c29d0562a4c"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.255405 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrngg" event={"ID":"0cbdb0ba-36d8-4cb7-878a-88afedb7983c","Type":"ContainerStarted","Data":"f87e419ffd7b1e2e8fdbe1a93c2880b3dd50763e358a3baab0cbf952dd2c8d77"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.288613 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" 
event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerStarted","Data":"a38182b01c3cee5e1900ffd63dfd5caa8f3537cc8b30be1b9904d7d140239d57"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.299249 4990 generic.go:334] "Generic (PLEG): container finished" podID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerID="c40e2a70ed55c200430ea909fd7cef4a34ccca28247412f2d190bd5585914cc1" exitCode=0 Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.299336 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerDied","Data":"c40e2a70ed55c200430ea909fd7cef4a34ccca28247412f2d190bd5585914cc1"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.299385 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerStarted","Data":"4a1a53129bfa46d7e87653d6a41d7328debcac016a9e04af1bbb0fbb30320523"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.301661 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.319074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" event={"ID":"dea57213-02e7-4a09-adcd-a67306d41b54","Type":"ContainerStarted","Data":"d037095a63402b02a3f11aa1c1890475efb8e8dfa43613b7f479c6e2564480d4"} Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.321823 4990 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nkp9t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.321881 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.341405 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.341648 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-catalog-content\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.341733 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-utilities\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 
01:10:48.341751 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvng4\" (UniqueName: \"kubernetes.io/projected/df8138ec-8df1-4959-90dd-ee4a224c92f8-kube-api-access-xvng4\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.341940 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.841919491 +0000 UTC m=+147.218134852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.420638 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f5g9m" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.443655 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.443792 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-catalog-content\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.443963 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-utilities\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.443997 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvng4\" (UniqueName: \"kubernetes.io/projected/df8138ec-8df1-4959-90dd-ee4a224c92f8-kube-api-access-xvng4\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.444888 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-catalog-content\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.446091 4990 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-utilities\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.446174 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:48.946154395 +0000 UTC m=+147.322369936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.485658 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvng4\" (UniqueName: \"kubernetes.io/projected/df8138ec-8df1-4959-90dd-ee4a224c92f8-kube-api-access-xvng4\") pod \"redhat-marketplace-mg56k\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.488100 4990 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.545421 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.545667 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.045620758 +0000 UTC m=+147.421836119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.545807 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.546247 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.046236616 +0000 UTC m=+147.422451977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.585076 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.638582 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7k6p5"] Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.648006 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.648949 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k6p5"] Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.649107 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.149082299 +0000 UTC m=+147.525297650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.649113 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.753613 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-catalog-content\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.754013 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqmrq\" (UniqueName: \"kubernetes.io/projected/002db936-950d-4f19-b394-a125b435fda5-kube-api-access-zqmrq\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.754097 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-utilities\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.754123 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.754540 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.25452104 +0000 UTC m=+147.630736401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.855851 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.856123 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.856245 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.356197578 +0000 UTC m=+147.732412939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.856376 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.856456 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-catalog-content\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.856518 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqmrq\" (UniqueName: \"kubernetes.io/projected/002db936-950d-4f19-b394-a125b435fda5-kube-api-access-zqmrq\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.856743 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-utilities\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.856780 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.857309 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.357294661 +0000 UTC m=+147.733510022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.857439 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-catalog-content\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.857745 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-utilities\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.861426 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.877157 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.906282 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqmrq\" (UniqueName: \"kubernetes.io/projected/002db936-950d-4f19-b394-a125b435fda5-kube-api-access-zqmrq\") pod \"redhat-marketplace-7k6p5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:48 crc 
kubenswrapper[4990]: I1205 01:10:48.958282 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.958497 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.458450093 +0000 UTC m=+147.834665454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.958670 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.958715 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.958737 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:48 crc kubenswrapper[4990]: E1205 01:10:48.959173 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.459151074 +0000 UTC m=+147.835366435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.965346 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.969047 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:48 crc kubenswrapper[4990]: I1205 01:10:48.985675 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.015118 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.015894 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.018419 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.018734 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.027531 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.058337 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 01:10:49 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld Dec 05 01:10:49 crc kubenswrapper[4990]: [+]process-running ok Dec 05 01:10:49 crc kubenswrapper[4990]: healthz check failed Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.058401 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.059191 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:49 crc kubenswrapper[4990]: E1205 01:10:49.059454 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.559429671 +0000 UTC m=+147.935645022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.059530 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:49 crc kubenswrapper[4990]: E1205 01:10:49.059910 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.559899875 +0000 UTC m=+147.936115236 (durationBeforeRetry 500ms). 
Dec 05 01:10:49 crc kubenswrapper[4990]: E1205 01:10:49.059910 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 01:10:49.559899875 +0000 UTC m=+147.936115236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vchzm" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.076348 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg56k"]
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.079147 4990 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-05T01:10:48.488128626Z","Handler":null,"Name":""}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.086502 4990 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.086542 4990 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.147200 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.156121 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
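The RegisterPlugin and csi_plugin.go entries just above are the node-level registration handshake: the driver's registrar drops a socket under /var/lib/kubelet/plugins_registry/, and the kubelet validates the driver by asking its CSI Identity service for its name over the advertised endpoint. Roughly what that validation amounts to, as a hedged sketch (the socket path is taken from the log; running this on a real node would need root and the CSI spec's Go bindings):

package main

import (
	"context"
	"fmt"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Endpoint taken from "Register new plugin ... at endpoint:" above.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// GetPluginInfo returns the driver name the kubelet keys its registry on:
	// "kubevirt.io.hostpath-provisioner" in this log.
	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("driver:", info.GetName(), "version:", info.GetVendorVersion())
}

From this point on the pvc-657094db-... retries stop failing: the teardown for the old pod (8f668bae-...) and the mount for image-registry-697d97f7c8-vchzm both go through below.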
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.165525 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.166141 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b060e43-62ca-4e4c-b246-bbf9df30a059-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.166247 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b060e43-62ca-4e4c-b246-bbf9df30a059-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.174893 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.201261 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k6p5"] Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.267343 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b060e43-62ca-4e4c-b246-bbf9df30a059-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.267437 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b060e43-62ca-4e4c-b246-bbf9df30a059-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.267552 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.267547 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b060e43-62ca-4e4c-b246-bbf9df30a059-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.275810 4990 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.275872 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.289871 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b060e43-62ca-4e4c-b246-bbf9df30a059-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.308313 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vchzm\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.341120 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" event={"ID":"dea57213-02e7-4a09-adcd-a67306d41b54","Type":"ContainerStarted","Data":"fbb7a9fb35208c21dc788dc62098489a06e8d0036ebbc9fcba2946c5d69b9834"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.341172 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" event={"ID":"dea57213-02e7-4a09-adcd-a67306d41b54","Type":"ContainerStarted","Data":"5794c727ef7f843bd2f3d36c0200322eeb4b789fde775aaddb186d32063de559"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.345875 4990 generic.go:334] "Generic (PLEG): container finished" podID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerID="2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123" exitCode=0
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.345997 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerDied","Data":"2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.348622 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k6p5" event={"ID":"002db936-950d-4f19-b394-a125b435fda5","Type":"ContainerStarted","Data":"13414b2d644fb252c38d51638add2a5f27fdbc2f76871452b8f675142544b997"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.351097 4990 generic.go:334] "Generic (PLEG): container finished" podID="e3b21d39-3456-4a12-a91b-459864e74087" containerID="4afec6235374e956b244388a9a2e025fb6f35b3c77708fd1af1a0ddadb2003a4" exitCode=0
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.351301 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" event={"ID":"e3b21d39-3456-4a12-a91b-459864e74087","Type":"ContainerDied","Data":"4afec6235374e956b244388a9a2e025fb6f35b3c77708fd1af1a0ddadb2003a4"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.354924 4990 generic.go:334] "Generic (PLEG): container finished" podID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerID="89008d52778aae1cfb28ded053f1faf57b2052ac2fbe5af7e123ecc29c27839c" exitCode=0
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.354987 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerDied","Data":"89008d52778aae1cfb28ded053f1faf57b2052ac2fbe5af7e123ecc29c27839c"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.355020 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerStarted","Data":"b00d7fc412a9cc8160e5cca60fc2aee9699c0e3eaa0cd547453a3e6d939cb19c"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.357975 4990 generic.go:334] "Generic (PLEG): container finished" podID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerID="3dc94067903c76be84eaae0fe6d50c63ceaa39c30e1ebd5bd369be21d034c905" exitCode=0
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.358592 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerDied","Data":"3dc94067903c76be84eaae0fe6d50c63ceaa39c30e1ebd5bd369be21d034c905"}
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.362441 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6b99n" podStartSLOduration=11.362417685 podStartE2EDuration="11.362417685s" podCreationTimestamp="2025-12-05 01:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:49.359541629 +0000 UTC m=+147.735756990" watchObservedRunningTime="2025-12-05 01:10:49.362417685 +0000 UTC m=+147.738633046"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.386469 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.431641 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kthx9"]
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.433091 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.438063 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.438785 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.468636 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kthx9"]
Dec 05 01:10:49 crc kubenswrapper[4990]: W1205 01:10:49.526094 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b889a540bb605f5c104a89d17e69e46d73661734c04fe128a9223445b4c74612 WatchSource:0}: Error finding container b889a540bb605f5c104a89d17e69e46d73661734c04fe128a9223445b4c74612: Status 404 returned error can't find the container with id b889a540bb605f5c104a89d17e69e46d73661734c04fe128a9223445b4c74612
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.572422 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-catalog-content\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.573352 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfnrz\" (UniqueName: \"kubernetes.io/projected/4decd32e-d179-4ec7-9ec0-c8744ef37b47-kube-api-access-hfnrz\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.573429 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-utilities\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.638352 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gxktl"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.643764 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gxktl"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.675244 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfnrz\" (UniqueName: \"kubernetes.io/projected/4decd32e-d179-4ec7-9ec0-c8744ef37b47-kube-api-access-hfnrz\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.675327 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-utilities\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.675389 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-catalog-content\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.675877 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-catalog-content\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.676320 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-utilities\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.686405 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-lwm9p"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.712183 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfnrz\" (UniqueName: \"kubernetes.io/projected/4decd32e-d179-4ec7-9ec0-c8744ef37b47-kube-api-access-hfnrz\") pod \"redhat-operators-kthx9\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.819914 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bs899"]
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.822322 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bs899"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.829361 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.838248 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bs899"]
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.859016 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 05 01:10:49 crc kubenswrapper[4990]: W1205 01:10:49.870715 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2b060e43_62ca_4e4c_b246_bbf9df30a059.slice/crio-3c33e8c9651415b26a094054438099a14d4b69764cb01e6966e76ed4ee50fa7a WatchSource:0}: Error finding container 3c33e8c9651415b26a094054438099a14d4b69764cb01e6966e76ed4ee50fa7a: Status 404 returned error can't find the container with id 3c33e8c9651415b26a094054438099a14d4b69764cb01e6966e76ed4ee50fa7a
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.893745 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vchzm"]
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.938932 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.979410 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-utilities\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.979504 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-catalog-content\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899"
Dec 05 01:10:49 crc kubenswrapper[4990]: I1205 01:10:49.979543 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4wc\" (UniqueName: \"kubernetes.io/projected/dfc49388-0761-4c9a-b28e-dd7d2b043092-kube-api-access-8c4wc\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899"
Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.070475 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 01:10:50 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld
Dec 05 01:10:50 crc kubenswrapper[4990]: [+]process-running ok
Dec 05 01:10:50 crc kubenswrapper[4990]: healthz check failed
Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.070891 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
started for volume \"kube-api-access-8c4wc\" (UniqueName: \"kubernetes.io/projected/dfc49388-0761-4c9a-b28e-dd7d2b043092-kube-api-access-8c4wc\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.081496 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-utilities\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.081548 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-catalog-content\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.082018 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-catalog-content\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.082416 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-utilities\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.106629 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4wc\" (UniqueName: \"kubernetes.io/projected/dfc49388-0761-4c9a-b28e-dd7d2b043092-kube-api-access-8c4wc\") pod \"redhat-operators-bs899\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.129751 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kthx9"] Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.144679 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:10:50 crc kubenswrapper[4990]: W1205 01:10:50.156052 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4decd32e_d179_4ec7_9ec0_c8744ef37b47.slice/crio-d5009259f58570980a3721480a4bdf953e357717f098e62e57cc9401a809c3f8 WatchSource:0}: Error finding container d5009259f58570980a3721480a4bdf953e357717f098e62e57cc9401a809c3f8: Status 404 returned error can't find the container with id d5009259f58570980a3721480a4bdf953e357717f098e62e57cc9401a809c3f8 Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.357100 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lxk9k" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.366214 4990 generic.go:334] "Generic (PLEG): container finished" podID="002db936-950d-4f19-b394-a125b435fda5" containerID="f035754ecced7e6c979df34697149e15a1254e9e1f66eaa658b5615bd8a9d5d3" exitCode=0 Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.366304 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k6p5" event={"ID":"002db936-950d-4f19-b394-a125b435fda5","Type":"ContainerDied","Data":"f035754ecced7e6c979df34697149e15a1254e9e1f66eaa658b5615bd8a9d5d3"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.367729 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3252e52cb89fafe2293bd556f26152a655e6bc1cedfe15bb81e5474f30b8368a"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.367752 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3d729ad40218bbd006d50a8c355bc4e2e43abcd5e0e9a15096b271f9608e912b"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.371947 4990 generic.go:334] "Generic (PLEG): container finished" podID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerID="1971f250d5a59bfa136190e9be00ed62a1cdf70d28349122d200826389c6ec8f" exitCode=0 Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.372008 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerDied","Data":"1971f250d5a59bfa136190e9be00ed62a1cdf70d28349122d200826389c6ec8f"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.372027 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerStarted","Data":"d5009259f58570980a3721480a4bdf953e357717f098e62e57cc9401a809c3f8"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.382694 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0f60f5943111348129bcb508c35cfa4389ea3d31f236ee550ecf960b5fe11777"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.383019 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"16c3d37b899e41ae22964d6f8dbd718725b6edea9316851bc31589578e5236a0"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.400903 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7f3dd9876465db0e23d095530d22324486df62c5f9da2183a1b473a6a17cf81b"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.400995 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b889a540bb605f5c104a89d17e69e46d73661734c04fe128a9223445b4c74612"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.402147 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.412498 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bs899"] Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.413178 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" event={"ID":"fde7ef59-700e-49a8-87f5-eac2580a1a54","Type":"ContainerStarted","Data":"a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.413310 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" event={"ID":"fde7ef59-700e-49a8-87f5-eac2580a1a54","Type":"ContainerStarted","Data":"6ef3034794a240df575c9064a25c428b6cba214c2cafec0d6bfade89a9559a7e"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.417428 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.429775 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2b060e43-62ca-4e4c-b246-bbf9df30a059","Type":"ContainerStarted","Data":"2ec0ab799268e3e52b1886292469103f20b551db606b64d5038ced5577be5a78"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.429850 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2b060e43-62ca-4e4c-b246-bbf9df30a059","Type":"ContainerStarted","Data":"3c33e8c9651415b26a094054438099a14d4b69764cb01e6966e76ed4ee50fa7a"} Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.471218 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.471189358 podStartE2EDuration="1.471189358s" podCreationTimestamp="2025-12-05 01:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:50.466130568 +0000 UTC m=+148.842345929" watchObservedRunningTime="2025-12-05 01:10:50.471189358 +0000 UTC m=+148.847404719" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.490024 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" podStartSLOduration=129.489997158 
podStartE2EDuration="2m9.489997158s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:10:50.489066711 +0000 UTC m=+148.865282072" watchObservedRunningTime="2025-12-05 01:10:50.489997158 +0000 UTC m=+148.866212519" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.719159 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.719737 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.723057 4990 patch_prober.go:28] interesting pod/console-f9d7485db-g6z24 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.723129 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g6z24" podUID="b8bb3b38-72ab-4295-8b62-99f5f424c711" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.728457 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.821308 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v58s\" (UniqueName: \"kubernetes.io/projected/e3b21d39-3456-4a12-a91b-459864e74087-kube-api-access-6v58s\") pod \"e3b21d39-3456-4a12-a91b-459864e74087\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.821444 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b21d39-3456-4a12-a91b-459864e74087-secret-volume\") pod \"e3b21d39-3456-4a12-a91b-459864e74087\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.821556 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b21d39-3456-4a12-a91b-459864e74087-config-volume\") pod \"e3b21d39-3456-4a12-a91b-459864e74087\" (UID: \"e3b21d39-3456-4a12-a91b-459864e74087\") " Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.823441 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b21d39-3456-4a12-a91b-459864e74087-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3b21d39-3456-4a12-a91b-459864e74087" (UID: "e3b21d39-3456-4a12-a91b-459864e74087"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.830815 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b21d39-3456-4a12-a91b-459864e74087-kube-api-access-6v58s" (OuterVolumeSpecName: "kube-api-access-6v58s") pod "e3b21d39-3456-4a12-a91b-459864e74087" (UID: "e3b21d39-3456-4a12-a91b-459864e74087"). InnerVolumeSpecName "kube-api-access-6v58s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.831097 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b21d39-3456-4a12-a91b-459864e74087-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3b21d39-3456-4a12-a91b-459864e74087" (UID: "e3b21d39-3456-4a12-a91b-459864e74087"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.923235 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v58s\" (UniqueName: \"kubernetes.io/projected/e3b21d39-3456-4a12-a91b-459864e74087-kube-api-access-6v58s\") on node \"crc\" DevicePath \"\"" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.923276 4990 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3b21d39-3456-4a12-a91b-459864e74087-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:10:50 crc kubenswrapper[4990]: I1205 01:10:50.923287 4990 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3b21d39-3456-4a12-a91b-459864e74087-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.072709 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.075913 4990 patch_prober.go:28] interesting pod/router-default-5444994796-slxt5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 01:10:51 crc kubenswrapper[4990]: [-]has-synced failed: reason withheld Dec 05 01:10:51 crc kubenswrapper[4990]: [+]process-running ok Dec 05 01:10:51 crc kubenswrapper[4990]: healthz check failed Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.075991 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-slxt5" podUID="d272e8c9-2d62-4783-94bb-a6a997e08c46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.492397 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.493002 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx" event={"ID":"e3b21d39-3456-4a12-a91b-459864e74087","Type":"ContainerDied","Data":"107ba6bc990e2929479a00d3c014236f643017c0fd3b6b7658b0f67089320d97"} Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.493051 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="107ba6bc990e2929479a00d3c014236f643017c0fd3b6b7658b0f67089320d97" Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.513103 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.518793 4990 generic.go:334] "Generic (PLEG): container finished" podID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerID="e6961208b1813950aef4bfc4a12f662a8a7c271ddca85337d812e7c38dd6e295" exitCode=0 Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.519147 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bs899" event={"ID":"dfc49388-0761-4c9a-b28e-dd7d2b043092","Type":"ContainerDied","Data":"e6961208b1813950aef4bfc4a12f662a8a7c271ddca85337d812e7c38dd6e295"} Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.519204 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bs899" event={"ID":"dfc49388-0761-4c9a-b28e-dd7d2b043092","Type":"ContainerStarted","Data":"923db79caf2dd58b23c72ca10fba11e8dcbbe35e6c8da860660e73ff1f5e77bb"} Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.528392 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2b060e43-62ca-4e4c-b246-bbf9df30a059","Type":"ContainerDied","Data":"2ec0ab799268e3e52b1886292469103f20b551db606b64d5038ced5577be5a78"} Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.528559 4990 generic.go:334] "Generic (PLEG): container finished" podID="2b060e43-62ca-4e4c-b246-bbf9df30a059" containerID="2ec0ab799268e3e52b1886292469103f20b551db606b64d5038ced5577be5a78" exitCode=0 Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.824254 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:10:51 crc kubenswrapper[4990]: I1205 01:10:51.824801 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:10:52 crc kubenswrapper[4990]: I1205 01:10:52.056459 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:52 crc kubenswrapper[4990]: I1205 01:10:52.065059 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-slxt5" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.023779 4990 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.197288 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b060e43-62ca-4e4c-b246-bbf9df30a059-kubelet-dir\") pod \"2b060e43-62ca-4e4c-b246-bbf9df30a059\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.197432 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b060e43-62ca-4e4c-b246-bbf9df30a059-kube-api-access\") pod \"2b060e43-62ca-4e4c-b246-bbf9df30a059\" (UID: \"2b060e43-62ca-4e4c-b246-bbf9df30a059\") " Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.198174 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b060e43-62ca-4e4c-b246-bbf9df30a059-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2b060e43-62ca-4e4c-b246-bbf9df30a059" (UID: "2b060e43-62ca-4e4c-b246-bbf9df30a059"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.215803 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b060e43-62ca-4e4c-b246-bbf9df30a059-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2b060e43-62ca-4e4c-b246-bbf9df30a059" (UID: "2b060e43-62ca-4e4c-b246-bbf9df30a059"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.299855 4990 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b060e43-62ca-4e4c-b246-bbf9df30a059-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.299889 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b060e43-62ca-4e4c-b246-bbf9df30a059-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.609280 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2b060e43-62ca-4e4c-b246-bbf9df30a059","Type":"ContainerDied","Data":"3c33e8c9651415b26a094054438099a14d4b69764cb01e6966e76ed4ee50fa7a"} Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.609729 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c33e8c9651415b26a094054438099a14d4b69764cb01e6966e76ed4ee50fa7a" Dec 05 01:10:53 crc kubenswrapper[4990]: I1205 01:10:53.609800 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.965599 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 01:10:54 crc kubenswrapper[4990]: E1205 01:10:54.965956 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b060e43-62ca-4e4c-b246-bbf9df30a059" containerName="pruner" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.965969 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b060e43-62ca-4e4c-b246-bbf9df30a059" containerName="pruner" Dec 05 01:10:54 crc kubenswrapper[4990]: E1205 01:10:54.965985 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b21d39-3456-4a12-a91b-459864e74087" containerName="collect-profiles" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.965991 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b21d39-3456-4a12-a91b-459864e74087" containerName="collect-profiles" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.966120 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b21d39-3456-4a12-a91b-459864e74087" containerName="collect-profiles" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.966137 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b060e43-62ca-4e4c-b246-bbf9df30a059" containerName="pruner" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.966700 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.968702 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.969171 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 01:10:54 crc kubenswrapper[4990]: I1205 01:10:54.975402 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.033612 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02da0c6f-729f-4042-bdab-dde5d58c1e22-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.033902 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02da0c6f-729f-4042-bdab-dde5d58c1e22-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.136012 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02da0c6f-729f-4042-bdab-dde5d58c1e22-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.136105 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/02da0c6f-729f-4042-bdab-dde5d58c1e22-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.136272 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02da0c6f-729f-4042-bdab-dde5d58c1e22-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.178554 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02da0c6f-729f-4042-bdab-dde5d58c1e22-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.291192 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:10:55 crc kubenswrapper[4990]: I1205 01:10:55.811132 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 01:10:56 crc kubenswrapper[4990]: I1205 01:10:56.788198 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02da0c6f-729f-4042-bdab-dde5d58c1e22","Type":"ContainerStarted","Data":"d16b31687ef3b30ddaff8bdc32edb4845a6709892c26676dfa5f42eeaec073b8"} Dec 05 01:10:56 crc kubenswrapper[4990]: I1205 01:10:56.861339 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5jm9j" Dec 05 01:10:57 crc kubenswrapper[4990]: I1205 01:10:57.805329 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02da0c6f-729f-4042-bdab-dde5d58c1e22","Type":"ContainerStarted","Data":"071aa934188f4dd483560dd9219c8564c2e0fe65a692977dcfde3ec7fe9dc239"} Dec 05 01:10:58 crc kubenswrapper[4990]: I1205 01:10:58.823236 4990 generic.go:334] "Generic (PLEG): container finished" podID="02da0c6f-729f-4042-bdab-dde5d58c1e22" containerID="071aa934188f4dd483560dd9219c8564c2e0fe65a692977dcfde3ec7fe9dc239" exitCode=0 Dec 05 01:10:58 crc kubenswrapper[4990]: I1205 01:10:58.823720 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02da0c6f-729f-4042-bdab-dde5d58c1e22","Type":"ContainerDied","Data":"071aa934188f4dd483560dd9219c8564c2e0fe65a692977dcfde3ec7fe9dc239"} Dec 05 01:11:00 crc kubenswrapper[4990]: I1205 01:11:00.738382 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:11:00 crc kubenswrapper[4990]: I1205 01:11:00.744261 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.686759 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.765959 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02da0c6f-729f-4042-bdab-dde5d58c1e22-kubelet-dir\") pod \"02da0c6f-729f-4042-bdab-dde5d58c1e22\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.766171 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02da0c6f-729f-4042-bdab-dde5d58c1e22-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02da0c6f-729f-4042-bdab-dde5d58c1e22" (UID: "02da0c6f-729f-4042-bdab-dde5d58c1e22"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.766224 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02da0c6f-729f-4042-bdab-dde5d58c1e22-kube-api-access\") pod \"02da0c6f-729f-4042-bdab-dde5d58c1e22\" (UID: \"02da0c6f-729f-4042-bdab-dde5d58c1e22\") " Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.767095 4990 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02da0c6f-729f-4042-bdab-dde5d58c1e22-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.780968 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02da0c6f-729f-4042-bdab-dde5d58c1e22-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02da0c6f-729f-4042-bdab-dde5d58c1e22" (UID: "02da0c6f-729f-4042-bdab-dde5d58c1e22"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.850405 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02da0c6f-729f-4042-bdab-dde5d58c1e22","Type":"ContainerDied","Data":"d16b31687ef3b30ddaff8bdc32edb4845a6709892c26676dfa5f42eeaec073b8"} Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.850456 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16b31687ef3b30ddaff8bdc32edb4845a6709892c26676dfa5f42eeaec073b8" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.850515 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 01:11:02 crc kubenswrapper[4990]: I1205 01:11:02.868510 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02da0c6f-729f-4042-bdab-dde5d58c1e22-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:04 crc kubenswrapper[4990]: I1205 01:11:04.292428 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:11:04 crc kubenswrapper[4990]: I1205 01:11:04.299260 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7760172e-33aa-4de9-bd10-6a92c0851c6e-metrics-certs\") pod \"network-metrics-daemon-bxb6s\" (UID: \"7760172e-33aa-4de9-bd10-6a92c0851c6e\") " pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:11:04 crc kubenswrapper[4990]: I1205 01:11:04.351626 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxb6s" Dec 05 01:11:09 crc kubenswrapper[4990]: I1205 01:11:09.446311 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:11:16 crc kubenswrapper[4990]: E1205 01:11:16.314237 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 05 01:11:16 crc kubenswrapper[4990]: E1205 01:11:16.315737 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvng4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-mg56k_openshift-marketplace(df8138ec-8df1-4959-90dd-ee4a224c92f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 01:11:16 crc kubenswrapper[4990]: E1205 01:11:16.316947 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mg56k" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" Dec 05 01:11:19 crc kubenswrapper[4990]: E1205 01:11:19.138782 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mg56k" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" Dec 05 01:11:19 crc kubenswrapper[4990]: I1205 01:11:19.161640 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 01:11:19 crc kubenswrapper[4990]: E1205 01:11:19.233230 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 01:11:19 crc kubenswrapper[4990]: E1205 01:11:19.233549 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjv5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6l6n4_openshift-marketplace(5db06188-8c87-4ce2-a928-97b7ebf55976): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 01:11:19 crc kubenswrapper[4990]: E1205 01:11:19.234766 4990 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6l6n4" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.627131 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6l6n4" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.697043 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.697263 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhd2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7vm5c_openshift-marketplace(4100dc4e-10a0-4d5c-b441-c87e80787d93): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.698539 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7vm5c" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.758193 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.758673 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5j7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2fx8c_openshift-marketplace(d30b3185-30ec-4a3f-a149-1073cd20ee46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.760059 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2fx8c" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.972728 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2fx8c" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" Dec 05 01:11:20 crc kubenswrapper[4990]: E1205 01:11:20.973830 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7vm5c" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.120530 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bxb6s"] Dec 05 01:11:21 crc kubenswrapper[4990]: W1205 01:11:21.133684 4990 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7760172e_33aa_4de9_bd10_6a92c0851c6e.slice/crio-886e1c5d031cfaac3fecc9bd3b32593b9bf506dd25a4023573f675f081642d5c WatchSource:0}: Error finding container 886e1c5d031cfaac3fecc9bd3b32593b9bf506dd25a4023573f675f081642d5c: Status 404 returned error can't find the container with id 886e1c5d031cfaac3fecc9bd3b32593b9bf506dd25a4023573f675f081642d5c Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.186893 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5thbl" Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.825136 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.825270 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.978202 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" event={"ID":"7760172e-33aa-4de9-bd10-6a92c0851c6e","Type":"ContainerStarted","Data":"4c6c9414ed24ee6fa10ad850b544090876a97d4c092e57305b22f43b1cc94a13"} Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.978635 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" event={"ID":"7760172e-33aa-4de9-bd10-6a92c0851c6e","Type":"ContainerStarted","Data":"0e3387a6b589d28a273dd16716d2c2ebbaa54dec6898c99fbe3f1766150106ec"} Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.978648 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bxb6s" event={"ID":"7760172e-33aa-4de9-bd10-6a92c0851c6e","Type":"ContainerStarted","Data":"886e1c5d031cfaac3fecc9bd3b32593b9bf506dd25a4023573f675f081642d5c"} Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.980212 4990 generic.go:334] "Generic (PLEG): container finished" podID="002db936-950d-4f19-b394-a125b435fda5" containerID="ac700baa991b5f21571aa2f73b1518846ba81d29e95e7ac151eb966fddba2527" exitCode=0 Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.980296 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k6p5" event={"ID":"002db936-950d-4f19-b394-a125b435fda5","Type":"ContainerDied","Data":"ac700baa991b5f21571aa2f73b1518846ba81d29e95e7ac151eb966fddba2527"} Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.982641 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerStarted","Data":"c1696983f31e76f0a8033f334f4d25d312922b470c6cf2f6b4af02f51a8614e4"} Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.986664 4990 generic.go:334] "Generic (PLEG): container finished" podID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerID="63b3d1069ed0d35a074ddc34871e4d1500b6752db984ce2c7bd79de50df77f60" exitCode=0 Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 
01:11:21.986869 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bs899" event={"ID":"dfc49388-0761-4c9a-b28e-dd7d2b043092","Type":"ContainerDied","Data":"63b3d1069ed0d35a074ddc34871e4d1500b6752db984ce2c7bd79de50df77f60"} Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.991089 4990 generic.go:334] "Generic (PLEG): container finished" podID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerID="ae63c5afeb153d2bab678fc1d932364a8bc090665816bac249d450af15e1d227" exitCode=0 Dec 05 01:11:21 crc kubenswrapper[4990]: I1205 01:11:21.991147 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrngg" event={"ID":"0cbdb0ba-36d8-4cb7-878a-88afedb7983c","Type":"ContainerDied","Data":"ae63c5afeb153d2bab678fc1d932364a8bc090665816bac249d450af15e1d227"} Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.002015 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k6p5" event={"ID":"002db936-950d-4f19-b394-a125b435fda5","Type":"ContainerStarted","Data":"6bf1ceeebf59753f7a97a5599d9dab2e065c2c6cfb4ccf02409843f14ba5e505"} Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.003994 4990 generic.go:334] "Generic (PLEG): container finished" podID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerID="c1696983f31e76f0a8033f334f4d25d312922b470c6cf2f6b4af02f51a8614e4" exitCode=0 Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.004108 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerDied","Data":"c1696983f31e76f0a8033f334f4d25d312922b470c6cf2f6b4af02f51a8614e4"} Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.007170 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bs899" event={"ID":"dfc49388-0761-4c9a-b28e-dd7d2b043092","Type":"ContainerStarted","Data":"575540128ce1a2496e9c8c078cc195e921136d5865a1c83d2440b0f60e3154f7"} Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.009753 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrngg" event={"ID":"0cbdb0ba-36d8-4cb7-878a-88afedb7983c","Type":"ContainerStarted","Data":"6c688b54455cfd946ea7f7044077f20d8b6e7a814384c36313fff951e295fff0"} Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.036129 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7k6p5" podStartSLOduration=3.021625892 podStartE2EDuration="35.036097819s" podCreationTimestamp="2025-12-05 01:10:48 +0000 UTC" firstStartedPulling="2025-12-05 01:10:50.368216431 +0000 UTC m=+148.744431802" lastFinishedPulling="2025-12-05 01:11:22.382688358 +0000 UTC m=+180.758903729" observedRunningTime="2025-12-05 01:11:23.033364788 +0000 UTC m=+181.409580149" watchObservedRunningTime="2025-12-05 01:11:23.036097819 +0000 UTC m=+181.412313180" Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.054981 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bxb6s" podStartSLOduration=162.054958551 podStartE2EDuration="2m42.054958551s" podCreationTimestamp="2025-12-05 01:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:11:23.053284771 +0000 UTC m=+181.429500132" watchObservedRunningTime="2025-12-05 
01:11:23.054958551 +0000 UTC m=+181.431173912" Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.105336 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrngg" podStartSLOduration=3.803529713 podStartE2EDuration="37.105313811s" podCreationTimestamp="2025-12-05 01:10:46 +0000 UTC" firstStartedPulling="2025-12-05 01:10:49.369781544 +0000 UTC m=+147.745996905" lastFinishedPulling="2025-12-05 01:11:22.671565642 +0000 UTC m=+181.047781003" observedRunningTime="2025-12-05 01:11:23.102839507 +0000 UTC m=+181.479054868" watchObservedRunningTime="2025-12-05 01:11:23.105313811 +0000 UTC m=+181.481529172" Dec 05 01:11:23 crc kubenswrapper[4990]: I1205 01:11:23.176315 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bs899" podStartSLOduration=3.304493139 podStartE2EDuration="34.176292365s" podCreationTimestamp="2025-12-05 01:10:49 +0000 UTC" firstStartedPulling="2025-12-05 01:10:51.522107257 +0000 UTC m=+149.898322648" lastFinishedPulling="2025-12-05 01:11:22.393906503 +0000 UTC m=+180.770121874" observedRunningTime="2025-12-05 01:11:23.148659652 +0000 UTC m=+181.524875013" watchObservedRunningTime="2025-12-05 01:11:23.176292365 +0000 UTC m=+181.552507726" Dec 05 01:11:24 crc kubenswrapper[4990]: I1205 01:11:24.029882 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerStarted","Data":"76b2b2e6f015dd6a26d82cab2e56c758000785d4eac43508089edd88802dc725"} Dec 05 01:11:24 crc kubenswrapper[4990]: I1205 01:11:24.050998 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kthx9" podStartSLOduration=2.059926362 podStartE2EDuration="35.050976495s" podCreationTimestamp="2025-12-05 01:10:49 +0000 UTC" firstStartedPulling="2025-12-05 01:10:50.376291312 +0000 UTC m=+148.752506663" lastFinishedPulling="2025-12-05 01:11:23.367341435 +0000 UTC m=+181.743556796" observedRunningTime="2025-12-05 01:11:24.046910054 +0000 UTC m=+182.423125425" watchObservedRunningTime="2025-12-05 01:11:24.050976495 +0000 UTC m=+182.427191856" Dec 05 01:11:26 crc kubenswrapper[4990]: I1205 01:11:26.605892 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrngg" Dec 05 01:11:26 crc kubenswrapper[4990]: I1205 01:11:26.605976 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrngg" Dec 05 01:11:26 crc kubenswrapper[4990]: I1205 01:11:26.695775 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrngg" Dec 05 01:11:27 crc kubenswrapper[4990]: I1205 01:11:27.096191 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrngg" Dec 05 01:11:28 crc kubenswrapper[4990]: I1205 01:11:28.987047 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:11:28 crc kubenswrapper[4990]: I1205 01:11:28.987132 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:11:29 crc kubenswrapper[4990]: I1205 01:11:29.036732 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:11:29 crc kubenswrapper[4990]: I1205 01:11:29.112420 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:11:29 crc kubenswrapper[4990]: I1205 01:11:29.829872 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kthx9" Dec 05 01:11:29 crc kubenswrapper[4990]: I1205 01:11:29.829953 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kthx9" Dec 05 01:11:29 crc kubenswrapper[4990]: I1205 01:11:29.835518 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k6p5"] Dec 05 01:11:29 crc kubenswrapper[4990]: I1205 01:11:29.878440 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kthx9" Dec 05 01:11:30 crc kubenswrapper[4990]: I1205 01:11:30.125332 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kthx9" Dec 05 01:11:30 crc kubenswrapper[4990]: I1205 01:11:30.146141 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:11:30 crc kubenswrapper[4990]: I1205 01:11:30.146205 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:11:30 crc kubenswrapper[4990]: I1205 01:11:30.207206 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.074686 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7k6p5" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="registry-server" containerID="cri-o://6bf1ceeebf59753f7a97a5599d9dab2e065c2c6cfb4ccf02409843f14ba5e505" gracePeriod=2 Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.141070 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.562338 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 01:11:31 crc kubenswrapper[4990]: E1205 01:11:31.563242 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02da0c6f-729f-4042-bdab-dde5d58c1e22" containerName="pruner" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.563564 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="02da0c6f-729f-4042-bdab-dde5d58c1e22" containerName="pruner" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.563958 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="02da0c6f-729f-4042-bdab-dde5d58c1e22" containerName="pruner" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.565102 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.569725 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.570372 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.580610 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.639151 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.639228 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.740705 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.740759 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.740876 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.762879 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:31 crc kubenswrapper[4990]: I1205 01:11:31.901271 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:32 crc kubenswrapper[4990]: I1205 01:11:32.191595 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 01:11:32 crc kubenswrapper[4990]: I1205 01:11:32.241967 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bs899"] Dec 05 01:11:33 crc kubenswrapper[4990]: I1205 01:11:33.086984 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4a20aeaa-57ed-4f60-88b1-d8b88126048e","Type":"ContainerStarted","Data":"0c2a7170cc2359e5085eba13b0cee846748980bc7f8ef2fb6a0bf65493423672"} Dec 05 01:11:33 crc kubenswrapper[4990]: I1205 01:11:33.087191 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bs899" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="registry-server" containerID="cri-o://575540128ce1a2496e9c8c078cc195e921136d5865a1c83d2440b0f60e3154f7" gracePeriod=2 Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.104743 4990 generic.go:334] "Generic (PLEG): container finished" podID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerID="575540128ce1a2496e9c8c078cc195e921136d5865a1c83d2440b0f60e3154f7" exitCode=0 Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.105259 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bs899" event={"ID":"dfc49388-0761-4c9a-b28e-dd7d2b043092","Type":"ContainerDied","Data":"575540128ce1a2496e9c8c078cc195e921136d5865a1c83d2440b0f60e3154f7"} Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.109102 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4a20aeaa-57ed-4f60-88b1-d8b88126048e","Type":"ContainerStarted","Data":"32f91addc0299ac3833b6bfd3ca94c5f53cda988d627eeb7afe47fda6312c562"} Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.112830 4990 generic.go:334] "Generic (PLEG): container finished" podID="002db936-950d-4f19-b394-a125b435fda5" containerID="6bf1ceeebf59753f7a97a5599d9dab2e065c2c6cfb4ccf02409843f14ba5e505" exitCode=0 Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.113790 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k6p5" event={"ID":"002db936-950d-4f19-b394-a125b435fda5","Type":"ContainerDied","Data":"6bf1ceeebf59753f7a97a5599d9dab2e065c2c6cfb4ccf02409843f14ba5e505"} Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.131079 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.131047786 podStartE2EDuration="4.131047786s" podCreationTimestamp="2025-12-05 01:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:11:35.128388017 +0000 UTC m=+193.504603468" watchObservedRunningTime="2025-12-05 01:11:35.131047786 +0000 UTC m=+193.507263187" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.304388 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.360711 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.399390 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqmrq\" (UniqueName: \"kubernetes.io/projected/002db936-950d-4f19-b394-a125b435fda5-kube-api-access-zqmrq\") pod \"002db936-950d-4f19-b394-a125b435fda5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.400214 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-catalog-content\") pod \"dfc49388-0761-4c9a-b28e-dd7d2b043092\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.400246 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c4wc\" (UniqueName: \"kubernetes.io/projected/dfc49388-0761-4c9a-b28e-dd7d2b043092-kube-api-access-8c4wc\") pod \"dfc49388-0761-4c9a-b28e-dd7d2b043092\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.400278 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-utilities\") pod \"dfc49388-0761-4c9a-b28e-dd7d2b043092\" (UID: \"dfc49388-0761-4c9a-b28e-dd7d2b043092\") " Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.400323 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-utilities\") pod \"002db936-950d-4f19-b394-a125b435fda5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.400352 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-catalog-content\") pod \"002db936-950d-4f19-b394-a125b435fda5\" (UID: \"002db936-950d-4f19-b394-a125b435fda5\") " Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.401196 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-utilities" (OuterVolumeSpecName: "utilities") pod "dfc49388-0761-4c9a-b28e-dd7d2b043092" (UID: "dfc49388-0761-4c9a-b28e-dd7d2b043092"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.401263 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-utilities" (OuterVolumeSpecName: "utilities") pod "002db936-950d-4f19-b394-a125b435fda5" (UID: "002db936-950d-4f19-b394-a125b435fda5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.404918 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc49388-0761-4c9a-b28e-dd7d2b043092-kube-api-access-8c4wc" (OuterVolumeSpecName: "kube-api-access-8c4wc") pod "dfc49388-0761-4c9a-b28e-dd7d2b043092" (UID: "dfc49388-0761-4c9a-b28e-dd7d2b043092"). InnerVolumeSpecName "kube-api-access-8c4wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.404916 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/002db936-950d-4f19-b394-a125b435fda5-kube-api-access-zqmrq" (OuterVolumeSpecName: "kube-api-access-zqmrq") pod "002db936-950d-4f19-b394-a125b435fda5" (UID: "002db936-950d-4f19-b394-a125b435fda5"). InnerVolumeSpecName "kube-api-access-zqmrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.501080 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c4wc\" (UniqueName: \"kubernetes.io/projected/dfc49388-0761-4c9a-b28e-dd7d2b043092-kube-api-access-8c4wc\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.501117 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.501134 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.501148 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqmrq\" (UniqueName: \"kubernetes.io/projected/002db936-950d-4f19-b394-a125b435fda5-kube-api-access-zqmrq\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.630706 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "002db936-950d-4f19-b394-a125b435fda5" (UID: "002db936-950d-4f19-b394-a125b435fda5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:11:35 crc kubenswrapper[4990]: I1205 01:11:35.703866 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002db936-950d-4f19-b394-a125b435fda5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.120061 4990 generic.go:334] "Generic (PLEG): container finished" podID="4a20aeaa-57ed-4f60-88b1-d8b88126048e" containerID="32f91addc0299ac3833b6bfd3ca94c5f53cda988d627eeb7afe47fda6312c562" exitCode=0 Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.120144 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4a20aeaa-57ed-4f60-88b1-d8b88126048e","Type":"ContainerDied","Data":"32f91addc0299ac3833b6bfd3ca94c5f53cda988d627eeb7afe47fda6312c562"} Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.123545 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k6p5" event={"ID":"002db936-950d-4f19-b394-a125b435fda5","Type":"ContainerDied","Data":"13414b2d644fb252c38d51638add2a5f27fdbc2f76871452b8f675142544b997"} Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.123583 4990 scope.go:117] "RemoveContainer" containerID="6bf1ceeebf59753f7a97a5599d9dab2e065c2c6cfb4ccf02409843f14ba5e505" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.123596 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k6p5" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.127102 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bs899" event={"ID":"dfc49388-0761-4c9a-b28e-dd7d2b043092","Type":"ContainerDied","Data":"923db79caf2dd58b23c72ca10fba11e8dcbbe35e6c8da860660e73ff1f5e77bb"} Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.127217 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bs899" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.162193 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k6p5"] Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.165857 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k6p5"] Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.599464 4990 scope.go:117] "RemoveContainer" containerID="ac700baa991b5f21571aa2f73b1518846ba81d29e95e7ac151eb966fddba2527" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.612333 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfc49388-0761-4c9a-b28e-dd7d2b043092" (UID: "dfc49388-0761-4c9a-b28e-dd7d2b043092"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.622118 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc49388-0761-4c9a-b28e-dd7d2b043092-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.761450 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bs899"] Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.769815 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bs899"] Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950624 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 01:11:36 crc kubenswrapper[4990]: E1205 01:11:36.950883 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="extract-utilities" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950896 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="extract-utilities" Dec 05 01:11:36 crc kubenswrapper[4990]: E1205 01:11:36.950905 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="extract-utilities" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950911 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="extract-utilities" Dec 05 01:11:36 crc kubenswrapper[4990]: E1205 01:11:36.950920 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="extract-content" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950927 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="extract-content" Dec 05 01:11:36 crc 
kubenswrapper[4990]: E1205 01:11:36.950937 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="registry-server" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950943 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="registry-server" Dec 05 01:11:36 crc kubenswrapper[4990]: E1205 01:11:36.950954 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="extract-content" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950961 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="extract-content" Dec 05 01:11:36 crc kubenswrapper[4990]: E1205 01:11:36.950970 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="registry-server" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.950976 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="registry-server" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.951062 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="002db936-950d-4f19-b394-a125b435fda5" containerName="registry-server" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.951074 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" containerName="registry-server" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.951704 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:36 crc kubenswrapper[4990]: I1205 01:11:36.964832 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.029041 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62756b9a-f2fe-4305-b031-087a5709d8dc-kube-api-access\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.029234 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.029270 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-var-lock\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.130297 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62756b9a-f2fe-4305-b031-087a5709d8dc-kube-api-access\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.130364 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.130390 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-var-lock\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.130467 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-var-lock\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.130524 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.136151 4990 scope.go:117] "RemoveContainer" containerID="f035754ecced7e6c979df34697149e15a1254e9e1f66eaa658b5615bd8a9d5d3" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.157353 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62756b9a-f2fe-4305-b031-087a5709d8dc-kube-api-access\") pod \"installer-9-crc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.274092 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.637385 4990 scope.go:117] "RemoveContainer" containerID="575540128ce1a2496e9c8c078cc195e921136d5865a1c83d2440b0f60e3154f7" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.677937 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.736534 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kube-api-access\") pod \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.736722 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kubelet-dir\") pod \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\" (UID: \"4a20aeaa-57ed-4f60-88b1-d8b88126048e\") " Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.736848 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4a20aeaa-57ed-4f60-88b1-d8b88126048e" (UID: "4a20aeaa-57ed-4f60-88b1-d8b88126048e"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.737006 4990 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.742464 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4a20aeaa-57ed-4f60-88b1-d8b88126048e" (UID: "4a20aeaa-57ed-4f60-88b1-d8b88126048e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.837891 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a20aeaa-57ed-4f60-88b1-d8b88126048e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.942303 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="002db936-950d-4f19-b394-a125b435fda5" path="/var/lib/kubelet/pods/002db936-950d-4f19-b394-a125b435fda5/volumes" Dec 05 01:11:37 crc kubenswrapper[4990]: I1205 01:11:37.943549 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc49388-0761-4c9a-b28e-dd7d2b043092" path="/var/lib/kubelet/pods/dfc49388-0761-4c9a-b28e-dd7d2b043092/volumes" Dec 05 01:11:38 crc kubenswrapper[4990]: I1205 01:11:38.144499 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 01:11:38 crc kubenswrapper[4990]: I1205 01:11:38.144470 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4a20aeaa-57ed-4f60-88b1-d8b88126048e","Type":"ContainerDied","Data":"0c2a7170cc2359e5085eba13b0cee846748980bc7f8ef2fb6a0bf65493423672"} Dec 05 01:11:38 crc kubenswrapper[4990]: I1205 01:11:38.144646 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2a7170cc2359e5085eba13b0cee846748980bc7f8ef2fb6a0bf65493423672" Dec 05 01:11:44 crc kubenswrapper[4990]: I1205 01:11:44.806570 4990 scope.go:117] "RemoveContainer" containerID="63b3d1069ed0d35a074ddc34871e4d1500b6752db984ce2c7bd79de50df77f60" Dec 05 01:11:44 crc kubenswrapper[4990]: I1205 01:11:44.868321 4990 scope.go:117] "RemoveContainer" containerID="e6961208b1813950aef4bfc4a12f662a8a7c271ddca85337d812e7c38dd6e295" Dec 05 01:11:45 crc kubenswrapper[4990]: I1205 01:11:45.096227 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 01:11:45 crc kubenswrapper[4990]: W1205 01:11:45.109664 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod62756b9a_f2fe_4305_b031_087a5709d8dc.slice/crio-97a5aed892fff817bec784da37b141381f2294dd41f720eb04ca59d987288fc2 WatchSource:0}: Error finding container 97a5aed892fff817bec784da37b141381f2294dd41f720eb04ca59d987288fc2: Status 404 returned error can't find the container with id 97a5aed892fff817bec784da37b141381f2294dd41f720eb04ca59d987288fc2 Dec 05 01:11:45 crc kubenswrapper[4990]: I1205 01:11:45.200492 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" 
event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerStarted","Data":"ff2f840fe33c38cbbbf9e2f7259a55b29ac80e1c810e2b271a6c4ae9dacf5b23"} Dec 05 01:11:45 crc kubenswrapper[4990]: I1205 01:11:45.204682 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerStarted","Data":"20ffe511e3ade39ea7c0c851875c2eab18afb0409c70fc529ac06613d6858d65"} Dec 05 01:11:45 crc kubenswrapper[4990]: I1205 01:11:45.206216 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"62756b9a-f2fe-4305-b031-087a5709d8dc","Type":"ContainerStarted","Data":"97a5aed892fff817bec784da37b141381f2294dd41f720eb04ca59d987288fc2"} Dec 05 01:11:45 crc kubenswrapper[4990]: I1205 01:11:45.209057 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerStarted","Data":"1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0"} Dec 05 01:11:45 crc kubenswrapper[4990]: I1205 01:11:45.211809 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerStarted","Data":"b13a26c2d1722c83a15d32fd9f2886f97517535b530f0c9b2e152e1de3a2f9a3"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.223334 4990 generic.go:334] "Generic (PLEG): container finished" podID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerID="b13a26c2d1722c83a15d32fd9f2886f97517535b530f0c9b2e152e1de3a2f9a3" exitCode=0 Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.223383 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerDied","Data":"b13a26c2d1722c83a15d32fd9f2886f97517535b530f0c9b2e152e1de3a2f9a3"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.223862 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerStarted","Data":"c10460cca96f632974506a9ca70c7b67451ba042fbe8699b563fab9336ca1b67"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.228430 4990 generic.go:334] "Generic (PLEG): container finished" podID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerID="ff2f840fe33c38cbbbf9e2f7259a55b29ac80e1c810e2b271a6c4ae9dacf5b23" exitCode=0 Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.228539 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerDied","Data":"ff2f840fe33c38cbbbf9e2f7259a55b29ac80e1c810e2b271a6c4ae9dacf5b23"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.233950 4990 generic.go:334] "Generic (PLEG): container finished" podID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerID="20ffe511e3ade39ea7c0c851875c2eab18afb0409c70fc529ac06613d6858d65" exitCode=0 Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.234077 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerDied","Data":"20ffe511e3ade39ea7c0c851875c2eab18afb0409c70fc529ac06613d6858d65"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 
01:11:46.241891 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"62756b9a-f2fe-4305-b031-087a5709d8dc","Type":"ContainerStarted","Data":"726e0ec30f4959cd604024a525220f9d9082255086cbc3df8a31ce20a3b40a0b"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.255908 4990 generic.go:334] "Generic (PLEG): container finished" podID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerID="1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0" exitCode=0 Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.255992 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerDied","Data":"1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0"} Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.290085 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mg56k" podStartSLOduration=2.017800948 podStartE2EDuration="58.290062169s" podCreationTimestamp="2025-12-05 01:10:48 +0000 UTC" firstStartedPulling="2025-12-05 01:10:49.356851849 +0000 UTC m=+147.733067210" lastFinishedPulling="2025-12-05 01:11:45.62911304 +0000 UTC m=+204.005328431" observedRunningTime="2025-12-05 01:11:46.280023839 +0000 UTC m=+204.656239240" watchObservedRunningTime="2025-12-05 01:11:46.290062169 +0000 UTC m=+204.666277530" Dec 05 01:11:46 crc kubenswrapper[4990]: I1205 01:11:46.367410 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.367390639 podStartE2EDuration="10.367390639s" podCreationTimestamp="2025-12-05 01:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:11:46.364532926 +0000 UTC m=+204.740748287" watchObservedRunningTime="2025-12-05 01:11:46.367390639 +0000 UTC m=+204.743606000" Dec 05 01:11:47 crc kubenswrapper[4990]: I1205 01:11:47.266124 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerStarted","Data":"c5e240447be93537645a2aa8ea67f4260c76e8c77a81939c0aef5db70e8106ca"} Dec 05 01:11:47 crc kubenswrapper[4990]: I1205 01:11:47.270361 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerStarted","Data":"08d6a3ae4ea9ee88069928b52717f6104548705c0345e1007388a53a2c7c0a26"} Dec 05 01:11:47 crc kubenswrapper[4990]: I1205 01:11:47.274061 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerStarted","Data":"ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4"} Dec 05 01:11:47 crc kubenswrapper[4990]: I1205 01:11:47.286214 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6l6n4" podStartSLOduration=3.804705364 podStartE2EDuration="1m1.286190144s" podCreationTimestamp="2025-12-05 01:10:46 +0000 UTC" firstStartedPulling="2025-12-05 01:10:49.366668802 +0000 UTC m=+147.742884163" lastFinishedPulling="2025-12-05 01:11:46.848153582 +0000 UTC m=+205.224368943" observedRunningTime="2025-12-05 01:11:47.284153275 +0000 UTC 
m=+205.660368636" watchObservedRunningTime="2025-12-05 01:11:47.286190144 +0000 UTC m=+205.662405505" Dec 05 01:11:47 crc kubenswrapper[4990]: I1205 01:11:47.303787 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2fx8c" podStartSLOduration=3.857098067 podStartE2EDuration="1m1.30376295s" podCreationTimestamp="2025-12-05 01:10:46 +0000 UTC" firstStartedPulling="2025-12-05 01:10:49.349537101 +0000 UTC m=+147.725752462" lastFinishedPulling="2025-12-05 01:11:46.796201984 +0000 UTC m=+205.172417345" observedRunningTime="2025-12-05 01:11:47.302620917 +0000 UTC m=+205.678836278" watchObservedRunningTime="2025-12-05 01:11:47.30376295 +0000 UTC m=+205.679978311" Dec 05 01:11:47 crc kubenswrapper[4990]: I1205 01:11:47.325420 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7vm5c" podStartSLOduration=2.813281784 podStartE2EDuration="1m1.325399694s" podCreationTimestamp="2025-12-05 01:10:46 +0000 UTC" firstStartedPulling="2025-12-05 01:10:48.301299861 +0000 UTC m=+146.677515222" lastFinishedPulling="2025-12-05 01:11:46.813417771 +0000 UTC m=+205.189633132" observedRunningTime="2025-12-05 01:11:47.322243063 +0000 UTC m=+205.698458424" watchObservedRunningTime="2025-12-05 01:11:47.325399694 +0000 UTC m=+205.701615055" Dec 05 01:11:48 crc kubenswrapper[4990]: I1205 01:11:48.586461 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:11:48 crc kubenswrapper[4990]: I1205 01:11:48.587173 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:11:48 crc kubenswrapper[4990]: I1205 01:11:48.657418 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:11:51 crc kubenswrapper[4990]: I1205 01:11:51.824387 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:11:51 crc kubenswrapper[4990]: I1205 01:11:51.824965 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:11:51 crc kubenswrapper[4990]: I1205 01:11:51.825088 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:11:51 crc kubenswrapper[4990]: I1205 01:11:51.826358 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:11:51 crc kubenswrapper[4990]: I1205 01:11:51.826631 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" 
podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11" gracePeriod=600 Dec 05 01:11:53 crc kubenswrapper[4990]: I1205 01:11:53.318734 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11" exitCode=0 Dec 05 01:11:53 crc kubenswrapper[4990]: I1205 01:11:53.318864 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11"} Dec 05 01:11:53 crc kubenswrapper[4990]: I1205 01:11:53.319377 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"b40660c456587b8dd05170b85828fefb6a3f2c0ef31ac80948416bdfddfbcfec"} Dec 05 01:11:56 crc kubenswrapper[4990]: I1205 01:11:56.748551 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7vm5c" Dec 05 01:11:56 crc kubenswrapper[4990]: I1205 01:11:56.749545 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7vm5c" Dec 05 01:11:56 crc kubenswrapper[4990]: I1205 01:11:56.818808 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7vm5c" Dec 05 01:11:56 crc kubenswrapper[4990]: I1205 01:11:56.985758 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:11:56 crc kubenswrapper[4990]: I1205 01:11:56.985831 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.053632 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.172718 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.172784 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.230014 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.408150 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.414981 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:11:57 crc kubenswrapper[4990]: I1205 01:11:57.417375 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7vm5c" Dec 05 01:11:58 crc kubenswrapper[4990]: I1205 01:11:58.465809 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-2fx8c"] Dec 05 01:11:58 crc kubenswrapper[4990]: I1205 01:11:58.636941 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.359017 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2fx8c" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="registry-server" containerID="cri-o://ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4" gracePeriod=2 Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.866219 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6l6n4"] Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.867051 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6l6n4" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="registry-server" containerID="cri-o://c5e240447be93537645a2aa8ea67f4260c76e8c77a81939c0aef5db70e8106ca" gracePeriod=2 Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.876095 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.984713 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-utilities\") pod \"d30b3185-30ec-4a3f-a149-1073cd20ee46\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.984816 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5j7r\" (UniqueName: \"kubernetes.io/projected/d30b3185-30ec-4a3f-a149-1073cd20ee46-kube-api-access-z5j7r\") pod \"d30b3185-30ec-4a3f-a149-1073cd20ee46\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.984864 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-catalog-content\") pod \"d30b3185-30ec-4a3f-a149-1073cd20ee46\" (UID: \"d30b3185-30ec-4a3f-a149-1073cd20ee46\") " Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.985950 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-utilities" (OuterVolumeSpecName: "utilities") pod "d30b3185-30ec-4a3f-a149-1073cd20ee46" (UID: "d30b3185-30ec-4a3f-a149-1073cd20ee46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:11:59 crc kubenswrapper[4990]: I1205 01:11:59.992225 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30b3185-30ec-4a3f-a149-1073cd20ee46-kube-api-access-z5j7r" (OuterVolumeSpecName: "kube-api-access-z5j7r") pod "d30b3185-30ec-4a3f-a149-1073cd20ee46" (UID: "d30b3185-30ec-4a3f-a149-1073cd20ee46"). InnerVolumeSpecName "kube-api-access-z5j7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.052127 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d30b3185-30ec-4a3f-a149-1073cd20ee46" (UID: "d30b3185-30ec-4a3f-a149-1073cd20ee46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.086439 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5j7r\" (UniqueName: \"kubernetes.io/projected/d30b3185-30ec-4a3f-a149-1073cd20ee46-kube-api-access-z5j7r\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.086472 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.086514 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b3185-30ec-4a3f-a149-1073cd20ee46-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.373048 4990 generic.go:334] "Generic (PLEG): container finished" podID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerID="c5e240447be93537645a2aa8ea67f4260c76e8c77a81939c0aef5db70e8106ca" exitCode=0 Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.373134 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerDied","Data":"c5e240447be93537645a2aa8ea67f4260c76e8c77a81939c0aef5db70e8106ca"} Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.377567 4990 generic.go:334] "Generic (PLEG): container finished" podID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerID="ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4" exitCode=0 Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.377627 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerDied","Data":"ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4"} Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.377676 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fx8c" event={"ID":"d30b3185-30ec-4a3f-a149-1073cd20ee46","Type":"ContainerDied","Data":"fd97dc0481a2a66515f3ab547a2e5f0233700347f90789c8d9902a4ccbcfddbb"} Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.377709 4990 scope.go:117] "RemoveContainer" containerID="ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.377713 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2fx8c" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.401446 4990 scope.go:117] "RemoveContainer" containerID="1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.425455 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2fx8c"] Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.430922 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2fx8c"] Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.458283 4990 scope.go:117] "RemoveContainer" containerID="2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.482623 4990 scope.go:117] "RemoveContainer" containerID="ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4" Dec 05 01:12:00 crc kubenswrapper[4990]: E1205 01:12:00.483341 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4\": container with ID starting with ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4 not found: ID does not exist" containerID="ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.483380 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4"} err="failed to get container status \"ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4\": rpc error: code = NotFound desc = could not find container \"ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4\": container with ID starting with ed8864f4dec0bff9731726ee1cef9ed7b5f90aa15f7bd750d8e0c7f088dd0ad4 not found: ID does not exist" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.483407 4990 scope.go:117] "RemoveContainer" containerID="1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0" Dec 05 01:12:00 crc kubenswrapper[4990]: E1205 01:12:00.483804 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0\": container with ID starting with 1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0 not found: ID does not exist" containerID="1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.483851 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0"} err="failed to get container status \"1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0\": rpc error: code = NotFound desc = could not find container \"1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0\": container with ID starting with 1fff03ba4035755d4802bc173b4c994bb14420a577a4cb3f35dca6c6c437cfb0 not found: ID does not exist" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.483870 4990 scope.go:117] "RemoveContainer" containerID="2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123" Dec 05 01:12:00 crc kubenswrapper[4990]: E1205 01:12:00.484205 4990 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123\": container with ID starting with 2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123 not found: ID does not exist" containerID="2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.484292 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123"} err="failed to get container status \"2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123\": rpc error: code = NotFound desc = could not find container \"2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123\": container with ID starting with 2c6505016510cb04dd726bd2868b4971b07edef142e01b6fd65e5ac648b3f123 not found: ID does not exist" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.722426 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.896223 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-catalog-content\") pod \"5db06188-8c87-4ce2-a928-97b7ebf55976\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.896334 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-utilities\") pod \"5db06188-8c87-4ce2-a928-97b7ebf55976\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.896403 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjv5p\" (UniqueName: \"kubernetes.io/projected/5db06188-8c87-4ce2-a928-97b7ebf55976-kube-api-access-gjv5p\") pod \"5db06188-8c87-4ce2-a928-97b7ebf55976\" (UID: \"5db06188-8c87-4ce2-a928-97b7ebf55976\") " Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.897820 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-utilities" (OuterVolumeSpecName: "utilities") pod "5db06188-8c87-4ce2-a928-97b7ebf55976" (UID: "5db06188-8c87-4ce2-a928-97b7ebf55976"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.902525 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db06188-8c87-4ce2-a928-97b7ebf55976-kube-api-access-gjv5p" (OuterVolumeSpecName: "kube-api-access-gjv5p") pod "5db06188-8c87-4ce2-a928-97b7ebf55976" (UID: "5db06188-8c87-4ce2-a928-97b7ebf55976"). InnerVolumeSpecName "kube-api-access-gjv5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:12:00 crc kubenswrapper[4990]: I1205 01:12:00.983061 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5db06188-8c87-4ce2-a928-97b7ebf55976" (UID: "5db06188-8c87-4ce2-a928-97b7ebf55976"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:00.999801 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjv5p\" (UniqueName: \"kubernetes.io/projected/5db06188-8c87-4ce2-a928-97b7ebf55976-kube-api-access-gjv5p\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:00.999861 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:00.999886 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db06188-8c87-4ce2-a928-97b7ebf55976-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.387193 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6l6n4" event={"ID":"5db06188-8c87-4ce2-a928-97b7ebf55976","Type":"ContainerDied","Data":"a38182b01c3cee5e1900ffd63dfd5caa8f3537cc8b30be1b9904d7d140239d57"} Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.387337 4990 scope.go:117] "RemoveContainer" containerID="c5e240447be93537645a2aa8ea67f4260c76e8c77a81939c0aef5db70e8106ca" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.387363 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6l6n4" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.411556 4990 scope.go:117] "RemoveContainer" containerID="ff2f840fe33c38cbbbf9e2f7259a55b29ac80e1c810e2b271a6c4ae9dacf5b23" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.421518 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6l6n4"] Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.426712 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6l6n4"] Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.448398 4990 scope.go:117] "RemoveContainer" containerID="3dc94067903c76be84eaae0fe6d50c63ceaa39c30e1ebd5bd369be21d034c905" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.938691 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" path="/var/lib/kubelet/pods/5db06188-8c87-4ce2-a928-97b7ebf55976/volumes" Dec 05 01:12:01 crc kubenswrapper[4990]: I1205 01:12:01.939975 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" path="/var/lib/kubelet/pods/d30b3185-30ec-4a3f-a149-1073cd20ee46/volumes" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.511633 4990 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512734 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a20aeaa-57ed-4f60-88b1-d8b88126048e" containerName="pruner" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.512763 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a20aeaa-57ed-4f60-88b1-d8b88126048e" containerName="pruner" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512789 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="extract-utilities" Dec 05 01:12:23 
crc kubenswrapper[4990]: I1205 01:12:23.512806 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="extract-utilities" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512835 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="extract-utilities" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.512851 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="extract-utilities" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512874 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="registry-server" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.512889 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="registry-server" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512915 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="extract-content" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.512931 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="extract-content" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512950 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="extract-content" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.512966 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="extract-content" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.512990 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="registry-server" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.513006 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="registry-server" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.513231 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db06188-8c87-4ce2-a928-97b7ebf55976" containerName="registry-server" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.513260 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30b3185-30ec-4a3f-a149-1073cd20ee46" containerName="registry-server" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.513292 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a20aeaa-57ed-4f60-88b1-d8b88126048e" containerName="pruner" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514000 4990 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514232 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514606 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051" gracePeriod=15 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514761 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6" gracePeriod=15 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514918 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217" gracePeriod=15 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514783 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c" gracePeriod=15 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.514806 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a" gracePeriod=15 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515218 4990 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.515578 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515598 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.515617 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515631 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.515653 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515667 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.515689 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 
01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515702 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.515720 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515732 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.515751 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.515766 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516017 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516046 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516070 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516091 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516111 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516124 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.516307 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.516322 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.576243 4990 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.620933 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 
01:12:23.621022 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.621056 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.621115 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.621251 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.621344 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.621424 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.621453 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.722823 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.722911 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.722922 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.722962 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723002 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723033 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723044 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723118 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723197 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723218 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723282 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 
01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723330 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723349 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723380 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723398 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.723439 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.726414 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.728133 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.728819 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217" exitCode=0 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.728850 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6" exitCode=0 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.728865 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c" exitCode=0 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.728881 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a" exitCode=2 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.728956 4990 scope.go:117] "RemoveContainer" 
containerID="97e6b2bbd23c1e187e0c0cc7feb8ebbea136de3f9d49003434e82cf9a99bf106" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.730834 4990 generic.go:334] "Generic (PLEG): container finished" podID="62756b9a-f2fe-4305-b031-087a5709d8dc" containerID="726e0ec30f4959cd604024a525220f9d9082255086cbc3df8a31ce20a3b40a0b" exitCode=0 Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.730868 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"62756b9a-f2fe-4305-b031-087a5709d8dc","Type":"ContainerDied","Data":"726e0ec30f4959cd604024a525220f9d9082255086cbc3df8a31ce20a3b40a0b"} Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.731838 4990 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.732557 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:23 crc kubenswrapper[4990]: I1205 01:12:23.878398 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:23 crc kubenswrapper[4990]: W1205 01:12:23.909279 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-adb61b87244946504a354683299cb5dc3397c23c6233512efc7f1b5797d72594 WatchSource:0}: Error finding container adb61b87244946504a354683299cb5dc3397c23c6233512efc7f1b5797d72594: Status 404 returned error can't find the container with id adb61b87244946504a354683299cb5dc3397c23c6233512efc7f1b5797d72594 Dec 05 01:12:23 crc kubenswrapper[4990]: E1205 01:12:23.913775 4990 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e2c9a000792ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 01:12:23.912895212 +0000 UTC m=+242.289110613,LastTimestamp:2025-12-05 01:12:23.912895212 +0000 UTC m=+242.289110613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 01:12:24 crc kubenswrapper[4990]: I1205 01:12:24.740623 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"989ea35522a42ed149fd8ebc196e11317d5acf37dbf2feff31db5ee3baa0e1bd"} Dec 05 01:12:24 crc kubenswrapper[4990]: I1205 01:12:24.741125 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"adb61b87244946504a354683299cb5dc3397c23c6233512efc7f1b5797d72594"} Dec 05 01:12:24 crc kubenswrapper[4990]: E1205 01:12:24.742090 4990 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 01:12:24 crc kubenswrapper[4990]: I1205 01:12:24.742223 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:24 crc kubenswrapper[4990]: I1205 01:12:24.746415 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.064146 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.065424 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.145552 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-kubelet-dir\") pod \"62756b9a-f2fe-4305-b031-087a5709d8dc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.145623 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-var-lock\") pod \"62756b9a-f2fe-4305-b031-087a5709d8dc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.145713 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62756b9a-f2fe-4305-b031-087a5709d8dc-kube-api-access\") pod \"62756b9a-f2fe-4305-b031-087a5709d8dc\" (UID: \"62756b9a-f2fe-4305-b031-087a5709d8dc\") " Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.145744 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "62756b9a-f2fe-4305-b031-087a5709d8dc" (UID: "62756b9a-f2fe-4305-b031-087a5709d8dc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.145810 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-var-lock" (OuterVolumeSpecName: "var-lock") pod "62756b9a-f2fe-4305-b031-087a5709d8dc" (UID: "62756b9a-f2fe-4305-b031-087a5709d8dc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.146027 4990 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.146045 4990 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62756b9a-f2fe-4305-b031-087a5709d8dc-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.155213 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62756b9a-f2fe-4305-b031-087a5709d8dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "62756b9a-f2fe-4305-b031-087a5709d8dc" (UID: "62756b9a-f2fe-4305-b031-087a5709d8dc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.247682 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62756b9a-f2fe-4305-b031-087a5709d8dc-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:25 crc kubenswrapper[4990]: E1205 01:12:25.571124 4990 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e2c9a000792ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 01:12:23.912895212 +0000 UTC m=+242.289110613,LastTimestamp:2025-12-05 01:12:23.912895212 +0000 UTC m=+242.289110613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.753446 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"62756b9a-f2fe-4305-b031-087a5709d8dc","Type":"ContainerDied","Data":"97a5aed892fff817bec784da37b141381f2294dd41f720eb04ca59d987288fc2"} Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.753510 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a5aed892fff817bec784da37b141381f2294dd41f720eb04ca59d987288fc2" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.753585 4990 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.823759 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.881776 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.883528 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.884960 4990 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.885621 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958090 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958202 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958226 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958268 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958370 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958509 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958886 4990 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958922 4990 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:25 crc kubenswrapper[4990]: I1205 01:12:25.958942 4990 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.643244 4990 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.643839 4990 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.644291 4990 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.644592 4990 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.644883 4990 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.644912 4990 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.645179 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" 
interval="200ms" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.767391 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.768760 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051" exitCode=0 Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.769219 4990 scope.go:117] "RemoveContainer" containerID="3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.769743 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.771247 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.771670 4990 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.805579 4990 scope.go:117] "RemoveContainer" containerID="1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.806262 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.806631 4990 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.831001 4990 scope.go:117] "RemoveContainer" containerID="7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.847114 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.856563 4990 scope.go:117] "RemoveContainer" containerID="410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.884174 4990 scope.go:117] "RemoveContainer" containerID="433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.911567 4990 scope.go:117] 
"RemoveContainer" containerID="17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.945166 4990 scope.go:117] "RemoveContainer" containerID="3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.945839 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\": container with ID starting with 3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217 not found: ID does not exist" containerID="3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.945907 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217"} err="failed to get container status \"3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\": rpc error: code = NotFound desc = could not find container \"3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217\": container with ID starting with 3b92a6333ff682555e6af2b67f62930a9af16bbf42f9998becb8f67650122217 not found: ID does not exist" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.945950 4990 scope.go:117] "RemoveContainer" containerID="1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.946523 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\": container with ID starting with 1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6 not found: ID does not exist" containerID="1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.946596 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6"} err="failed to get container status \"1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\": rpc error: code = NotFound desc = could not find container \"1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6\": container with ID starting with 1e747b68a8c3c29f6c4b3452296892d9a27f9190e94bb1497431a48cd211e8b6 not found: ID does not exist" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.946657 4990 scope.go:117] "RemoveContainer" containerID="7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.947657 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\": container with ID starting with 7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c not found: ID does not exist" containerID="7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.947717 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c"} err="failed to get container status \"7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\": 
rpc error: code = NotFound desc = could not find container \"7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c\": container with ID starting with 7990e8a949f84ca71023f79822f5b860623876c5f64565e9d86843c1ffefae2c not found: ID does not exist" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.947759 4990 scope.go:117] "RemoveContainer" containerID="410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.948070 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\": container with ID starting with 410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a not found: ID does not exist" containerID="410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.948124 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a"} err="failed to get container status \"410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\": rpc error: code = NotFound desc = could not find container \"410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a\": container with ID starting with 410865fcd2969763b6210e186845d2ae32d1714d03a4758b1da803fe1dcf392a not found: ID does not exist" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.948143 4990 scope.go:117] "RemoveContainer" containerID="433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.948515 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\": container with ID starting with 433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051 not found: ID does not exist" containerID="433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.948584 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051"} err="failed to get container status \"433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\": rpc error: code = NotFound desc = could not find container \"433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051\": container with ID starting with 433af97162d6da091cd295a0b4ffc813a4d2f8aa231862f6a1d61c4091f4c051 not found: ID does not exist" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.948604 4990 scope.go:117] "RemoveContainer" containerID="17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d" Dec 05 01:12:26 crc kubenswrapper[4990]: E1205 01:12:26.949008 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\": container with ID starting with 17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d not found: ID does not exist" containerID="17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d" Dec 05 01:12:26 crc kubenswrapper[4990]: I1205 01:12:26.949061 4990 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d"} err="failed to get container status \"17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\": rpc error: code = NotFound desc = could not find container \"17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d\": container with ID starting with 17828a9d8752089c62d1f7db1016ca44d3fe1774bc168fd88e2fa96de313791d not found: ID does not exist" Dec 05 01:12:27 crc kubenswrapper[4990]: E1205 01:12:27.248359 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms" Dec 05 01:12:27 crc kubenswrapper[4990]: I1205 01:12:27.943956 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 05 01:12:28 crc kubenswrapper[4990]: E1205 01:12:28.049559 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s" Dec 05 01:12:29 crc kubenswrapper[4990]: E1205 01:12:29.650928 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s" Dec 05 01:12:31 crc kubenswrapper[4990]: I1205 01:12:31.935344 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:32 crc kubenswrapper[4990]: E1205 01:12:32.852003 4990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="6.4s" Dec 05 01:12:35 crc kubenswrapper[4990]: E1205 01:12:35.573860 4990 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e2c9a000792ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 01:12:23.912895212 +0000 UTC m=+242.289110613,LastTimestamp:2025-12-05 01:12:23.912895212 +0000 UTC m=+242.289110613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.841982 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.842078 4990 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2" exitCode=1 Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.842132 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2"} Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.843075 4990 scope.go:117] "RemoveContainer" containerID="bc1a86cd848696e6b8d7296270d5d8b58ffca056fe95db283a4f494c87684ac2" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.843471 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.844181 4990 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.929630 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.932208 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.932652 4990 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.959078 4990 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.959135 4990 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:36 crc kubenswrapper[4990]: E1205 01:12:36.959927 4990 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:36 crc kubenswrapper[4990]: I1205 01:12:36.960855 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.054403 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.852323 4990 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6dbaf15daf32a0e4c6f3a5b34c8c03259665c77eb848d5cd8df658e8b4b3f8e4" exitCode=0 Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.852426 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6dbaf15daf32a0e4c6f3a5b34c8c03259665c77eb848d5cd8df658e8b4b3f8e4"} Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.853000 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c663fd4441bdbf249984ea47f1936def12fcff942fedf956a9fd4d31ad0a3349"} Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.853442 4990 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.853519 4990 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.854230 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:37 crc kubenswrapper[4990]: E1205 01:12:37.854251 4990 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.854793 4990 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.859229 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.859325 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"519583b5bd536a71665c308349c8c57040d6a2209f0f86e4aac9813a0e4d11c7"} Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.860342 4990 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:37 crc kubenswrapper[4990]: I1205 01:12:37.860953 4990 status_manager.go:851] "Failed to get status for pod" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 05 01:12:38 crc kubenswrapper[4990]: I1205 01:12:38.875553 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"feca7fa4735414299e04bf497bab5bb88638ae129e3e8c8b4096b421fca48cc7"} Dec 05 01:12:38 crc kubenswrapper[4990]: I1205 01:12:38.875607 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c2cd2e645b7d6c4ff0014713f71fd3988ecd62747a12377bea95c55bb416a419"} Dec 05 01:12:38 crc kubenswrapper[4990]: I1205 01:12:38.875625 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b8c6b90601d962fb781ee59b80c783f1c207c682cbb6a0d94f58d6d8be40b2b"} Dec 05 01:12:39 crc kubenswrapper[4990]: I1205 01:12:39.883342 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee7d8d7ccc47b5164f57d7797c600fbea5e050015160ef37bb66b0d407ccb966"} Dec 05 01:12:39 crc kubenswrapper[4990]: I1205 01:12:39.883685 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"67bc2cf54a91ff919d5d34b327466f8cf3acf3754810e10af7130bc1bac08a1f"} Dec 05 01:12:39 crc kubenswrapper[4990]: I1205 01:12:39.885101 4990 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:39 crc kubenswrapper[4990]: I1205 01:12:39.885280 4990 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:41 crc kubenswrapper[4990]: I1205 01:12:41.961065 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:41 crc kubenswrapper[4990]: I1205 01:12:41.961142 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:41 crc kubenswrapper[4990]: I1205 01:12:41.970432 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:44 crc kubenswrapper[4990]: I1205 01:12:44.377709 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:12:44 crc kubenswrapper[4990]: I1205 01:12:44.900765 4990 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:44 crc kubenswrapper[4990]: I1205 01:12:44.969421 4990 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6e83c2eb-cf4d-41bd-9b65-bd7a4ec9d17b" Dec 05 01:12:45 crc kubenswrapper[4990]: I1205 01:12:45.920727 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:45 crc kubenswrapper[4990]: I1205 01:12:45.921715 4990 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:45 crc kubenswrapper[4990]: I1205 01:12:45.921745 4990 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:45 crc kubenswrapper[4990]: I1205 01:12:45.925269 4990 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6e83c2eb-cf4d-41bd-9b65-bd7a4ec9d17b" Dec 05 01:12:46 crc kubenswrapper[4990]: I1205 01:12:46.927992 4990 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:46 crc kubenswrapper[4990]: I1205 01:12:46.928370 4990 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:46 crc kubenswrapper[4990]: I1205 01:12:46.932234 4990 status_manager.go:861] "Pod was deleted and then recreated, skipping status 
update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6e83c2eb-cf4d-41bd-9b65-bd7a4ec9d17b" Dec 05 01:12:46 crc kubenswrapper[4990]: I1205 01:12:46.933550 4990 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://0b8c6b90601d962fb781ee59b80c783f1c207c682cbb6a0d94f58d6d8be40b2b" Dec 05 01:12:46 crc kubenswrapper[4990]: I1205 01:12:46.933763 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 01:12:47 crc kubenswrapper[4990]: I1205 01:12:47.054962 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:12:47 crc kubenswrapper[4990]: I1205 01:12:47.062715 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:12:47 crc kubenswrapper[4990]: I1205 01:12:47.942331 4990 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:47 crc kubenswrapper[4990]: I1205 01:12:47.942401 4990 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0aa465e0-0df4-4883-b893-6244a198c6c6" Dec 05 01:12:47 crc kubenswrapper[4990]: I1205 01:12:47.949198 4990 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6e83c2eb-cf4d-41bd-9b65-bd7a4ec9d17b" Dec 05 01:12:47 crc kubenswrapper[4990]: I1205 01:12:47.957099 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 01:12:54 crc kubenswrapper[4990]: I1205 01:12:54.747597 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 01:12:54 crc kubenswrapper[4990]: I1205 01:12:54.774558 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 01:12:55 crc kubenswrapper[4990]: I1205 01:12:55.289333 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 01:12:55 crc kubenswrapper[4990]: I1205 01:12:55.495778 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 01:12:55 crc kubenswrapper[4990]: I1205 01:12:55.828188 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 01:12:55 crc kubenswrapper[4990]: I1205 01:12:55.961135 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 01:12:55 crc kubenswrapper[4990]: I1205 01:12:55.987690 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 01:12:56 crc kubenswrapper[4990]: I1205 01:12:56.157593 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 01:12:56 crc kubenswrapper[4990]: I1205 01:12:56.467961 4990 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Dec 05 01:12:56 crc kubenswrapper[4990]: I1205 01:12:56.532869 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 01:12:56 crc kubenswrapper[4990]: I1205 01:12:56.613729 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 01:12:56 crc kubenswrapper[4990]: I1205 01:12:56.739122 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 01:12:56 crc kubenswrapper[4990]: I1205 01:12:56.885916 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.077614 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.108711 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.116276 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.139352 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.404593 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.450974 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.552310 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.598385 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.701983 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.767991 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.850936 4990 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 01:12:57 crc kubenswrapper[4990]: I1205 01:12:57.910856 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.002815 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.117937 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.147949 4990 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.159008 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.343253 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.359047 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.372416 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.409274 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.414885 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.556061 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.566630 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.671431 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:12:58 crc kubenswrapper[4990]: I1205 01:12:58.793559 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.004092 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.054652 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.091097 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.167318 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.192188 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.226134 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.252977 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.338936 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.411375 
4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.516181 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.562675 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.580253 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.605042 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.627862 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.630584 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.633864 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.691114 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.723117 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.814581 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.834919 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.845631 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.848466 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.891842 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.917947 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 01:12:59 crc kubenswrapper[4990]: I1205 01:12:59.926550 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.176807 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.208432 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.337376 4990 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.360603 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.404670 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.406797 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.482895 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.533554 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.570632 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.593716 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.738950 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.796017 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.821897 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.863321 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.955032 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 05 01:13:00 crc kubenswrapper[4990]: I1205 01:13:00.963983 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.107039 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.141169 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.144895 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.151922 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.230938 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.255818 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.281651 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.384293 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.409655 4990 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.454765 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.488229 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.489385 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.499843 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.745362 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.749682 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.752462 4990 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.763856 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.764682 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.764754 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.771267 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.792120 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.792101997 podStartE2EDuration="17.792101997s" podCreationTimestamp="2025-12-05 01:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:13:01.787714361 +0000 UTC m=+280.163929762" watchObservedRunningTime="2025-12-05 01:13:01.792101997 +0000 UTC m=+280.168317358"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.803765 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 05 01:13:01 crc kubenswrapper[4990]: I1205 01:13:01.996760 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
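The flood of reflector.go:368 "Caches populated" entries above marks each client-go reflector finishing its initial LIST, after which its watch cache is usable; the kubelet runs one such informer per secret/configmap it mounts, plus the node-wide ones (*v1.Node, *v1.Pod, *v1.RuntimeClass, *v1.Service, *v1.CSIDriver). A minimal sketch of the same mechanism follows, assuming only a reachable cluster and a kubeconfig at the default location; this is illustrative client-go usage, not kubelet source.

// Start a ConfigMap informer and wait for its cache to populate, which is the
// moment client-go's reflector logs "Caches populated for *v1.ConfigMap".
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared factory; the kubelet scopes its informers per namespace/object.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // runs the reflector's List+Watch in the background

	// Blocks until the initial LIST has been stored in the local cache.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("ConfigMap cache populated")
}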
from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.001355 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.004038 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.037110 4990 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.054721 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.105902 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.214141 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.253821 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.255105 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.357015 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.425286 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.446429 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.546713 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.827332 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.874139 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 01:13:02 crc kubenswrapper[4990]: I1205 01:13:02.925551 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.171772 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.174575 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.175922 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.219147 4990 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.521174 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.578704 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.589113 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.594885 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.688358 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.718938 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.754527 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.799312 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.844615 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.866800 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.897443 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.900714 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.913096 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.939548 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.941940 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 01:13:03 crc kubenswrapper[4990]: I1205 01:13:03.988201 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.025343 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.210688 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.239900 4990 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.241313 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.260184 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.269570 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.318752 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.331186 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.349081 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.413663 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.437578 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.463157 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.528598 4990 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.535057 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.568812 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.611826 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.646841 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.657376 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.882415 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 01:13:04 crc kubenswrapper[4990]: I1205 01:13:04.995558 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.000627 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.076592 4990 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.097062 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.138985 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.260634 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.283216 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.349058 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.366546 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.389472 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.473743 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.547230 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.571588 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.572350 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.698982 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.716790 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.736730 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.776552 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.788949 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.796384 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.838068 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.848504 4990 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.881261 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 01:13:05 crc kubenswrapper[4990]: I1205 01:13:05.923625 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.061887 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.148602 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.207368 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.235157 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.324136 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.342334 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.510507 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.529512 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.584328 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.589785 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.649876 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.672382 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.673343 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.676937 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.746955 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.761900 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 
01:13:06.844171 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.881565 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 01:13:06 crc kubenswrapper[4990]: I1205 01:13:06.903860 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.046999 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.171019 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.175293 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.184932 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.190465 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.224816 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.396776 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.570768 4990 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.571122 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://989ea35522a42ed149fd8ebc196e11317d5acf37dbf2feff31db5ee3baa0e1bd" gracePeriod=5 Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.576435 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.662150 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.674864 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.803727 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.819328 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.838453 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 01:13:07 crc kubenswrapper[4990]: 
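The "Killing container with a grace period ... gracePeriod=5" entry above is the kubelet asking the container runtime to stop the startup-monitor static pod's container. An illustrative CRI client doing the equivalent call follows; this is a sketch, not kubelet code, and it assumes CRI-O on its default socket path. The Timeout field is the grace period in seconds: the runtime delivers SIGTERM, waits that long, then force-kills, which is consistent with the exitCode=137 (128+SIGKILL) this container reports later in the log.

// Stop a container through the CRI API with a 5-second grace period.
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O listening at the default socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Timeout is the grace period in seconds before the runtime force-kills.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "989ea35522a42ed149fd8ebc196e11317d5acf37dbf2feff31db5ee3baa0e1bd",
		Timeout:     5,
	})
	if err != nil {
		panic(err)
	}
}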
Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.843789 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 05 01:13:07 crc kubenswrapper[4990]: I1205 01:13:07.845167 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.017117 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.131714 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.312443 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.499169 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.559222 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.565369 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.677221 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.707757 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.764329 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.819052 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.821068 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.857580 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.873252 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 05 01:13:08 crc kubenswrapper[4990]: I1205 01:13:08.999888 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.000918 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.068719 4990 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.275846 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.322055 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.486770 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.505696 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.556675 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.565810 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.605350 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.745336 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.789912 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 05 01:13:09 crc kubenswrapper[4990]: I1205 01:13:09.940106 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.012027 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.030434 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.050311 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.102009 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.184659 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.273013 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.499957 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.566528 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.687837 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.845811 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 05 01:13:10 crc kubenswrapper[4990]: I1205 01:13:10.870212 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 05 01:13:11 crc kubenswrapper[4990]: I1205 01:13:11.317041 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 05 01:13:11 crc kubenswrapper[4990]: I1205 01:13:11.565105 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 05 01:13:11 crc kubenswrapper[4990]: I1205 01:13:11.828770 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 05 01:13:12 crc kubenswrapper[4990]: I1205 01:13:12.410715 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 05 01:13:12 crc kubenswrapper[4990]: I1205 01:13:12.706534 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.106427 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.106834 4990 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="989ea35522a42ed149fd8ebc196e11317d5acf37dbf2feff31db5ee3baa0e1bd" exitCode=137
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.160206 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.160308 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.224963 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225008 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225039 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225077 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225109 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225175 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225200 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225210 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225238 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225655 4990 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225707 4990 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225733 4990 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.225755 4990 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.232529 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.326402 4990 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:13 crc kubenswrapper[4990]: I1205 01:13:13.944961 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Dec 05 01:13:14 crc kubenswrapper[4990]: I1205 01:13:14.114393 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 05 01:13:14 crc kubenswrapper[4990]: I1205 01:13:14.114475 4990 scope.go:117] "RemoveContainer" containerID="989ea35522a42ed149fd8ebc196e11317d5acf37dbf2feff31db5ee3baa0e1bd"
Dec 05 01:13:14 crc kubenswrapper[4990]: I1205 01:13:14.114545 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.263888 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2wzr5"]
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.266099 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" podUID="73009b29-5f92-4552-969c-669c459575ae" containerName="controller-manager" containerID="cri-o://192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe" gracePeriod=30
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.267989 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"]
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.268162 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" podUID="bbf181f5-152c-4424-9206-9b2981b901ac" containerName="route-controller-manager" containerID="cri-o://f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75" gracePeriod=30
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.668436 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5"
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.675668 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.852808 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-config\") pod \"bbf181f5-152c-4424-9206-9b2981b901ac\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.852905 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73009b29-5f92-4552-969c-669c459575ae-serving-cert\") pod \"73009b29-5f92-4552-969c-669c459575ae\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.852982 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr47c\" (UniqueName: \"kubernetes.io/projected/73009b29-5f92-4552-969c-669c459575ae-kube-api-access-jr47c\") pod \"73009b29-5f92-4552-969c-669c459575ae\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.853014 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-proxy-ca-bundles\") pod \"73009b29-5f92-4552-969c-669c459575ae\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.853054 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-client-ca\") pod \"73009b29-5f92-4552-969c-669c459575ae\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.853100 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf181f5-152c-4424-9206-9b2981b901ac-serving-cert\") pod \"bbf181f5-152c-4424-9206-9b2981b901ac\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.853134 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-client-ca\") pod \"bbf181f5-152c-4424-9206-9b2981b901ac\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.853186 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-config\") pod \"73009b29-5f92-4552-969c-669c459575ae\" (UID: \"73009b29-5f92-4552-969c-669c459575ae\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.853216 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpk6k\" (UniqueName: \"kubernetes.io/projected/bbf181f5-152c-4424-9206-9b2981b901ac-kube-api-access-bpk6k\") pod \"bbf181f5-152c-4424-9206-9b2981b901ac\" (UID: \"bbf181f5-152c-4424-9206-9b2981b901ac\") "
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.854458 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "73009b29-5f92-4552-969c-669c459575ae" (UID: "73009b29-5f92-4552-969c-669c459575ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.854543 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "73009b29-5f92-4552-969c-669c459575ae" (UID: "73009b29-5f92-4552-969c-669c459575ae"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.854585 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-config" (OuterVolumeSpecName: "config") pod "73009b29-5f92-4552-969c-669c459575ae" (UID: "73009b29-5f92-4552-969c-669c459575ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.854975 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-client-ca" (OuterVolumeSpecName: "client-ca") pod "bbf181f5-152c-4424-9206-9b2981b901ac" (UID: "bbf181f5-152c-4424-9206-9b2981b901ac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.855236 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-config" (OuterVolumeSpecName: "config") pod "bbf181f5-152c-4424-9206-9b2981b901ac" (UID: "bbf181f5-152c-4424-9206-9b2981b901ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.860076 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf181f5-152c-4424-9206-9b2981b901ac-kube-api-access-bpk6k" (OuterVolumeSpecName: "kube-api-access-bpk6k") pod "bbf181f5-152c-4424-9206-9b2981b901ac" (UID: "bbf181f5-152c-4424-9206-9b2981b901ac"). InnerVolumeSpecName "kube-api-access-bpk6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.860191 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73009b29-5f92-4552-969c-669c459575ae-kube-api-access-jr47c" (OuterVolumeSpecName: "kube-api-access-jr47c") pod "73009b29-5f92-4552-969c-669c459575ae" (UID: "73009b29-5f92-4552-969c-669c459575ae"). InnerVolumeSpecName "kube-api-access-jr47c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.861315 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73009b29-5f92-4552-969c-669c459575ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "73009b29-5f92-4552-969c-669c459575ae" (UID: "73009b29-5f92-4552-969c-669c459575ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.861593 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf181f5-152c-4424-9206-9b2981b901ac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bbf181f5-152c-4424-9206-9b2981b901ac" (UID: "bbf181f5-152c-4424-9206-9b2981b901ac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.954928 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-config\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.954979 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpk6k\" (UniqueName: \"kubernetes.io/projected/bbf181f5-152c-4424-9206-9b2981b901ac-kube-api-access-bpk6k\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955002 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-config\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955022 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73009b29-5f92-4552-969c-669c459575ae-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955039 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr47c\" (UniqueName: \"kubernetes.io/projected/73009b29-5f92-4552-969c-669c459575ae-kube-api-access-jr47c\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955057 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955073 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73009b29-5f92-4552-969c-669c459575ae-client-ca\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955089 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf181f5-152c-4424-9206-9b2981b901ac-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:38 crc kubenswrapper[4990]: I1205 01:13:38.955105 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbf181f5-152c-4424-9206-9b2981b901ac-client-ca\") on node \"crc\" DevicePath \"\""
Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188006 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-866bd7f759-hrncf"]
Dec 05 01:13:39 crc kubenswrapper[4990]: E1205 01:13:39.188252 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" containerName="installer"
Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188268 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" containerName="installer"
Dec 05 01:13:39 crc kubenswrapper[4990]: E1205 01:13:39.188290 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73009b29-5f92-4552-969c-669c459575ae" containerName="controller-manager"
Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188299 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="73009b29-5f92-4552-969c-669c459575ae" containerName="controller-manager"
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188325 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 01:13:39 crc kubenswrapper[4990]: E1205 01:13:39.188338 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf181f5-152c-4424-9206-9b2981b901ac" containerName="route-controller-manager" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188348 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf181f5-152c-4424-9206-9b2981b901ac" containerName="route-controller-manager" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188472 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188515 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf181f5-152c-4424-9206-9b2981b901ac" containerName="route-controller-manager" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188536 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="62756b9a-f2fe-4305-b031-087a5709d8dc" containerName="installer" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.188548 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="73009b29-5f92-4552-969c-669c459575ae" containerName="controller-manager" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.189064 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.209353 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-866bd7f759-hrncf"] Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.242158 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg"] Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.243551 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.248363 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg"] Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.270660 4990 generic.go:334] "Generic (PLEG): container finished" podID="73009b29-5f92-4552-969c-669c459575ae" containerID="192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe" exitCode=0 Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.270727 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" event={"ID":"73009b29-5f92-4552-969c-669c459575ae","Type":"ContainerDied","Data":"192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe"} Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.270755 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" event={"ID":"73009b29-5f92-4552-969c-669c459575ae","Type":"ContainerDied","Data":"fa9f3b2922fb8d06ccb1ecfeca5d99ccee37bdf0e0021b01b7ed7704b42db4d2"} Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.270773 4990 scope.go:117] "RemoveContainer" containerID="192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.270866 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2wzr5" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.273420 4990 generic.go:334] "Generic (PLEG): container finished" podID="bbf181f5-152c-4424-9206-9b2981b901ac" containerID="f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75" exitCode=0 Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.273474 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" event={"ID":"bbf181f5-152c-4424-9206-9b2981b901ac","Type":"ContainerDied","Data":"f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75"} Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.273527 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" event={"ID":"bbf181f5-152c-4424-9206-9b2981b901ac","Type":"ContainerDied","Data":"534d6d4686068a9089890bdf5f807271f5508cc68241c76e4abd6fc1fc9767b1"} Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.273541 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.288026 4990 scope.go:117] "RemoveContainer" containerID="192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe" Dec 05 01:13:39 crc kubenswrapper[4990]: E1205 01:13:39.288549 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe\": container with ID starting with 192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe not found: ID does not exist" containerID="192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.288572 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe"} err="failed to get container status \"192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe\": rpc error: code = NotFound desc = could not find container \"192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe\": container with ID starting with 192d662c9f7212ee7f8242a136ee535aa551f8c2fd51d49d70e5bcc49778d1fe not found: ID does not exist" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.288590 4990 scope.go:117] "RemoveContainer" containerID="f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.299574 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2wzr5"] Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.308773 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2wzr5"] Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.309574 4990 scope.go:117] "RemoveContainer" containerID="f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75" Dec 05 01:13:39 crc kubenswrapper[4990]: E1205 01:13:39.310530 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75\": container with ID starting with f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75 not found: ID does not exist" containerID="f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.310622 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75"} err="failed to get container status \"f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75\": rpc error: code = NotFound desc = could not find container \"f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75\": container with ID starting with f246c3f5ad0bb902e2b86a905ce4d6c49020e27514ffae59d13c20d17cfafa75 not found: ID does not exist" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.313473 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"] Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.317580 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wqk2q"] Dec 05 
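The two "DeleteContainer returned error" entries above are benign: by the time the kubelet retries RemoveContainer, CRI-O has already discarded the container, so ContainerStatus comes back as gRPC NotFound and the kubelet treats the cleanup as already complete. A minimal sketch of that idempotent-delete pattern, assuming google.golang.org/grpc is on the module path; removeContainer and runtimeRemove are illustrative stand-ins, not the kubelet's actual API:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer sketches the pattern visible in the log: a gRPC NotFound
// from the runtime means the container is already gone, so a retried
// cleanup is treated as success rather than as a failure.
// runtimeRemove stands in for the CRI RemoveContainer call.
func removeContainer(id string, runtimeRemove func(string) error) error {
	if err := runtimeRemove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already removed by an earlier attempt; nothing left to do
		}
		return fmt.Errorf("failed to remove container %q: %w", id, err)
	}
	return nil
}

func main() {
	alreadyGone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer("192d662c9f72", alreadyGone); err != nil {
		fmt.Println("unexpected:", err)
	} else {
		fmt.Println("NotFound treated as already-removed")
	}
}
```
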
01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360216 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca2b5df-7388-4393-86c3-65c19e0d6890-serving-cert\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360314 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-proxy-ca-bundles\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360367 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-config\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360409 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4d839-4622-4876-8c21-42cf2a2bf6d4-serving-cert\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360455 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-client-ca\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360475 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-config\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360540 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8jb\" (UniqueName: \"kubernetes.io/projected/54a4d839-4622-4876-8c21-42cf2a2bf6d4-kube-api-access-zv8jb\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.360565 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-client-ca\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 
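The VerifyControllerAttachedVolume and MountVolume entries above (and the UnmountVolume / "Volume detached" entries later in this log) come from the kubelet's volume manager reconciling a desired state of the world against an actual state: volumes wanted by a newly scheduled pod are mounted, and volumes belonging to a deleted pod are torn down. A toy sketch of that reconcile loop, using illustrative names and map-based state rather than the kubelet's real types:

```go
package main

import "fmt"

// volumeKey identifies one volume of one pod, as the reconciler entries do.
type volumeKey struct{ podUID, volume string }

// reconcile mounts volumes present only in the desired state and unmounts
// volumes present only in the actual state, mirroring the ordering of the
// MountVolume / UnmountVolume entries in the log above.
func reconcile(desired, actual map[volumeKey]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.volume, v.podUID)
			actual[v] = true // stand-in for operationExecutor.MountVolume / SetUp
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.volume)
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.volume, v.podUID)
			delete(actual, v) // stand-in for UnmountVolume.TearDown
			fmt.Printf("Volume detached for volume %q\n", v.volume)
		}
	}
}

func main() {
	desired := map[volumeKey]bool{
		{"54a4d839", "serving-cert"}: true,
		{"54a4d839", "client-ca"}:    true,
	}
	actual := map[volumeKey]bool{
		{"aca2b5df", "config"}: true, // pod deleted; volume must come down
	}
	reconcile(desired, actual)
}
```
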
crc kubenswrapper[4990]: I1205 01:13:39.360585 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87zk\" (UniqueName: \"kubernetes.io/projected/aca2b5df-7388-4393-86c3-65c19e0d6890-kube-api-access-p87zk\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.460637 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-866bd7f759-hrncf"] Dec 05 01:13:39 crc kubenswrapper[4990]: E1205 01:13:39.461065 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-p87zk proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" podUID="aca2b5df-7388-4393-86c3-65c19e0d6890" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461207 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8jb\" (UniqueName: \"kubernetes.io/projected/54a4d839-4622-4876-8c21-42cf2a2bf6d4-kube-api-access-zv8jb\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461248 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-client-ca\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461267 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87zk\" (UniqueName: \"kubernetes.io/projected/aca2b5df-7388-4393-86c3-65c19e0d6890-kube-api-access-p87zk\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461288 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca2b5df-7388-4393-86c3-65c19e0d6890-serving-cert\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461311 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-proxy-ca-bundles\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461343 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-config\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " 
pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461383 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4d839-4622-4876-8c21-42cf2a2bf6d4-serving-cert\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461419 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-client-ca\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.461436 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-config\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.463571 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-client-ca\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.463573 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-client-ca\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.463725 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-proxy-ca-bundles\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.464662 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-config\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.466030 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-config\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.476400 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca2b5df-7388-4393-86c3-65c19e0d6890-serving-cert\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.476882 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4d839-4622-4876-8c21-42cf2a2bf6d4-serving-cert\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.487334 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8jb\" (UniqueName: \"kubernetes.io/projected/54a4d839-4622-4876-8c21-42cf2a2bf6d4-kube-api-access-zv8jb\") pod \"route-controller-manager-dbf5d54d9-2z7qg\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.488001 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87zk\" (UniqueName: \"kubernetes.io/projected/aca2b5df-7388-4393-86c3-65c19e0d6890-kube-api-access-p87zk\") pod \"controller-manager-866bd7f759-hrncf\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.557237 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.789346 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg"] Dec 05 01:13:39 crc kubenswrapper[4990]: W1205 01:13:39.799932 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54a4d839_4622_4876_8c21_42cf2a2bf6d4.slice/crio-97724c16af380f644d39024327ffb6c92afeff0f1061d45123a538646d9d7a1b WatchSource:0}: Error finding container 97724c16af380f644d39024327ffb6c92afeff0f1061d45123a538646d9d7a1b: Status 404 returned error can't find the container with id 97724c16af380f644d39024327ffb6c92afeff0f1061d45123a538646d9d7a1b Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.938845 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73009b29-5f92-4552-969c-669c459575ae" path="/var/lib/kubelet/pods/73009b29-5f92-4552-969c-669c459575ae/volumes" Dec 05 01:13:39 crc kubenswrapper[4990]: I1205 01:13:39.939728 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf181f5-152c-4424-9206-9b2981b901ac" path="/var/lib/kubelet/pods/bbf181f5-152c-4424-9206-9b2981b901ac/volumes" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.281246 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" event={"ID":"54a4d839-4622-4876-8c21-42cf2a2bf6d4","Type":"ContainerStarted","Data":"bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085"} Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.281680 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" event={"ID":"54a4d839-4622-4876-8c21-42cf2a2bf6d4","Type":"ContainerStarted","Data":"97724c16af380f644d39024327ffb6c92afeff0f1061d45123a538646d9d7a1b"} Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.281827 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.286709 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.297553 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.309685 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" podStartSLOduration=1.309650162 podStartE2EDuration="1.309650162s" podCreationTimestamp="2025-12-05 01:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:13:40.30261749 +0000 UTC m=+318.678832921" watchObservedRunningTime="2025-12-05 01:13:40.309650162 +0000 UTC m=+318.685865553" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.417003 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.474189 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-client-ca\") pod \"aca2b5df-7388-4393-86c3-65c19e0d6890\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.474255 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca2b5df-7388-4393-86c3-65c19e0d6890-serving-cert\") pod \"aca2b5df-7388-4393-86c3-65c19e0d6890\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.474321 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87zk\" (UniqueName: \"kubernetes.io/projected/aca2b5df-7388-4393-86c3-65c19e0d6890-kube-api-access-p87zk\") pod \"aca2b5df-7388-4393-86c3-65c19e0d6890\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.474338 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-proxy-ca-bundles\") pod \"aca2b5df-7388-4393-86c3-65c19e0d6890\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.474357 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-config\") pod \"aca2b5df-7388-4393-86c3-65c19e0d6890\" (UID: \"aca2b5df-7388-4393-86c3-65c19e0d6890\") " Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.474762 4990 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-client-ca" (OuterVolumeSpecName: "client-ca") pod "aca2b5df-7388-4393-86c3-65c19e0d6890" (UID: "aca2b5df-7388-4393-86c3-65c19e0d6890"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.475397 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aca2b5df-7388-4393-86c3-65c19e0d6890" (UID: "aca2b5df-7388-4393-86c3-65c19e0d6890"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.476056 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-config" (OuterVolumeSpecName: "config") pod "aca2b5df-7388-4393-86c3-65c19e0d6890" (UID: "aca2b5df-7388-4393-86c3-65c19e0d6890"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.480621 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca2b5df-7388-4393-86c3-65c19e0d6890-kube-api-access-p87zk" (OuterVolumeSpecName: "kube-api-access-p87zk") pod "aca2b5df-7388-4393-86c3-65c19e0d6890" (UID: "aca2b5df-7388-4393-86c3-65c19e0d6890"). InnerVolumeSpecName "kube-api-access-p87zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.482610 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca2b5df-7388-4393-86c3-65c19e0d6890-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aca2b5df-7388-4393-86c3-65c19e0d6890" (UID: "aca2b5df-7388-4393-86c3-65c19e0d6890"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.575773 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.575806 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87zk\" (UniqueName: \"kubernetes.io/projected/aca2b5df-7388-4393-86c3-65c19e0d6890-kube-api-access-p87zk\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.575820 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.575828 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aca2b5df-7388-4393-86c3-65c19e0d6890-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:40 crc kubenswrapper[4990]: I1205 01:13:40.575838 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca2b5df-7388-4393-86c3-65c19e0d6890-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.294003 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-866bd7f759-hrncf" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.364675 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7445b794c8-7pdc8"] Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.366232 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.370292 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.371075 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.375194 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.375833 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.376282 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.379998 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-866bd7f759-hrncf"] Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.384606 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-866bd7f759-hrncf"] Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.385945 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.394044 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.396500 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7445b794c8-7pdc8"] Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.486532 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrzk\" (UniqueName: \"kubernetes.io/projected/d968ad55-8f58-4ac4-84dd-af086fff69b1-kube-api-access-lxrzk\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.486645 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-client-ca\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.486677 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-proxy-ca-bundles\") pod 
\"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.486729 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-config\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.486783 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d968ad55-8f58-4ac4-84dd-af086fff69b1-serving-cert\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.587707 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-client-ca\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.587772 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-proxy-ca-bundles\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.587835 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-config\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.587892 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d968ad55-8f58-4ac4-84dd-af086fff69b1-serving-cert\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.587925 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrzk\" (UniqueName: \"kubernetes.io/projected/d968ad55-8f58-4ac4-84dd-af086fff69b1-kube-api-access-lxrzk\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.589621 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-config\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: 
I1205 01:13:41.589743 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-client-ca\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.590131 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-proxy-ca-bundles\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.594317 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d968ad55-8f58-4ac4-84dd-af086fff69b1-serving-cert\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.604277 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxrzk\" (UniqueName: \"kubernetes.io/projected/d968ad55-8f58-4ac4-84dd-af086fff69b1-kube-api-access-lxrzk\") pod \"controller-manager-7445b794c8-7pdc8\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.690926 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.862778 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7445b794c8-7pdc8"] Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.873460 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg"] Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.937577 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca2b5df-7388-4393-86c3-65c19e0d6890" path="/var/lib/kubelet/pods/aca2b5df-7388-4393-86c3-65c19e0d6890/volumes" Dec 05 01:13:41 crc kubenswrapper[4990]: I1205 01:13:41.951105 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7445b794c8-7pdc8"] Dec 05 01:13:41 crc kubenswrapper[4990]: W1205 01:13:41.957094 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd968ad55_8f58_4ac4_84dd_af086fff69b1.slice/crio-cee7332178b2e6d1405cc21dae6cccc45d7c342f590ef55ac857c9cb67cc0fd9 WatchSource:0}: Error finding container cee7332178b2e6d1405cc21dae6cccc45d7c342f590ef55ac857c9cb67cc0fd9: Status 404 returned error can't find the container with id cee7332178b2e6d1405cc21dae6cccc45d7c342f590ef55ac857c9cb67cc0fd9 Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.302010 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" event={"ID":"d968ad55-8f58-4ac4-84dd-af086fff69b1","Type":"ContainerStarted","Data":"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525"} Dec 05 01:13:42 
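Each "SyncLoop (PLEG): event for pod" entry above carries a pod name plus an event payload (pod UID, event type, and a container or sandbox ID). When auditing a rollout like this one, it can help to pull just those fields out of the journal; the regular expression below is an assumption fitted to the format shown here, not a schema the kubelet guarantees:

```go
package main

import (
	"fmt"
	"regexp"
)

// plegRe matches the PLEG event lines seen in this journal and captures the
// pod name, pod UID, event type, and container/sandbox ID.
var plegRe = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	// One ContainerStarted line copied from the log above.
	line := `I1205 01:13:42.302010 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" event={"ID":"d968ad55-8f58-4ac4-84dd-af086fff69b1","Type":"ContainerStarted","Data":"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525"}`
	if m := plegRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("pod=%s uid=%s type=%s id=%.12s\n", m[1], m[2], m[3], m[4])
	}
}
```
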
crc kubenswrapper[4990]: I1205 01:13:42.302074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" event={"ID":"d968ad55-8f58-4ac4-84dd-af086fff69b1","Type":"ContainerStarted","Data":"cee7332178b2e6d1405cc21dae6cccc45d7c342f590ef55ac857c9cb67cc0fd9"} Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.302118 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" podUID="d968ad55-8f58-4ac4-84dd-af086fff69b1" containerName="controller-manager" containerID="cri-o://3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525" gracePeriod=30 Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.302509 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.319380 4990 patch_prober.go:28] interesting pod/controller-manager-7445b794c8-7pdc8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:60326->10.217.0.58:8443: read: connection reset by peer" start-of-body= Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.319367 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" podStartSLOduration=3.319345453 podStartE2EDuration="3.319345453s" podCreationTimestamp="2025-12-05 01:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:13:42.318955902 +0000 UTC m=+320.695171273" watchObservedRunningTime="2025-12-05 01:13:42.319345453 +0000 UTC m=+320.695560824" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.319459 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" podUID="d968ad55-8f58-4ac4-84dd-af086fff69b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:60326->10.217.0.58:8443: read: connection reset by peer" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.672110 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.804443 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d968ad55-8f58-4ac4-84dd-af086fff69b1-serving-cert\") pod \"d968ad55-8f58-4ac4-84dd-af086fff69b1\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.804523 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-proxy-ca-bundles\") pod \"d968ad55-8f58-4ac4-84dd-af086fff69b1\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.804559 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-config\") pod \"d968ad55-8f58-4ac4-84dd-af086fff69b1\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.804636 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-client-ca\") pod \"d968ad55-8f58-4ac4-84dd-af086fff69b1\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.804665 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxrzk\" (UniqueName: \"kubernetes.io/projected/d968ad55-8f58-4ac4-84dd-af086fff69b1-kube-api-access-lxrzk\") pod \"d968ad55-8f58-4ac4-84dd-af086fff69b1\" (UID: \"d968ad55-8f58-4ac4-84dd-af086fff69b1\") " Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.805674 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d968ad55-8f58-4ac4-84dd-af086fff69b1" (UID: "d968ad55-8f58-4ac4-84dd-af086fff69b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.805701 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "d968ad55-8f58-4ac4-84dd-af086fff69b1" (UID: "d968ad55-8f58-4ac4-84dd-af086fff69b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.805827 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-config" (OuterVolumeSpecName: "config") pod "d968ad55-8f58-4ac4-84dd-af086fff69b1" (UID: "d968ad55-8f58-4ac4-84dd-af086fff69b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.812868 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d968ad55-8f58-4ac4-84dd-af086fff69b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d968ad55-8f58-4ac4-84dd-af086fff69b1" (UID: "d968ad55-8f58-4ac4-84dd-af086fff69b1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.813360 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d968ad55-8f58-4ac4-84dd-af086fff69b1-kube-api-access-lxrzk" (OuterVolumeSpecName: "kube-api-access-lxrzk") pod "d968ad55-8f58-4ac4-84dd-af086fff69b1" (UID: "d968ad55-8f58-4ac4-84dd-af086fff69b1"). InnerVolumeSpecName "kube-api-access-lxrzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.906535 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d968ad55-8f58-4ac4-84dd-af086fff69b1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.906576 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.906597 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.906608 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d968ad55-8f58-4ac4-84dd-af086fff69b1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:42 crc kubenswrapper[4990]: I1205 01:13:42.906620 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxrzk\" (UniqueName: \"kubernetes.io/projected/d968ad55-8f58-4ac4-84dd-af086fff69b1-kube-api-access-lxrzk\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.312437 4990 generic.go:334] "Generic (PLEG): container finished" podID="d968ad55-8f58-4ac4-84dd-af086fff69b1" containerID="3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525" exitCode=0 Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.312538 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" event={"ID":"d968ad55-8f58-4ac4-84dd-af086fff69b1","Type":"ContainerDied","Data":"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525"} Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.312632 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" event={"ID":"d968ad55-8f58-4ac4-84dd-af086fff69b1","Type":"ContainerDied","Data":"cee7332178b2e6d1405cc21dae6cccc45d7c342f590ef55ac857c9cb67cc0fd9"} Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.312558 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7445b794c8-7pdc8" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.312655 4990 scope.go:117] "RemoveContainer" containerID="3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.312762 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" podUID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" containerName="route-controller-manager" containerID="cri-o://bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085" gracePeriod=30 Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.341928 4990 scope.go:117] "RemoveContainer" containerID="3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525" Dec 05 01:13:43 crc kubenswrapper[4990]: E1205 01:13:43.342965 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525\": container with ID starting with 3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525 not found: ID does not exist" containerID="3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.343026 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525"} err="failed to get container status \"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525\": rpc error: code = NotFound desc = could not find container \"3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525\": container with ID starting with 3bdd82e8dbae879646a625c37ed52cf8b596db150a178ded20dd443eb3974525 not found: ID does not exist" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.359448 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7445b794c8-7pdc8"] Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.367097 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7445b794c8-7pdc8"] Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.729063 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.854508 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz"] Dec 05 01:13:43 crc kubenswrapper[4990]: E1205 01:13:43.854801 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d968ad55-8f58-4ac4-84dd-af086fff69b1" containerName="controller-manager" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.854822 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d968ad55-8f58-4ac4-84dd-af086fff69b1" containerName="controller-manager" Dec 05 01:13:43 crc kubenswrapper[4990]: E1205 01:13:43.854837 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" containerName="route-controller-manager" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.854848 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" containerName="route-controller-manager" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.855036 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" containerName="route-controller-manager" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.855056 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d968ad55-8f58-4ac4-84dd-af086fff69b1" containerName="controller-manager" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.855606 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.862660 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.863558 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.866174 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.866269 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.866654 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.866873 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.867095 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.867301 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.876921 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz"] Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.878121 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.891994 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.920193 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4d839-4622-4876-8c21-42cf2a2bf6d4-serving-cert\") pod \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.920457 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-config\") pod \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.920579 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv8jb\" (UniqueName: \"kubernetes.io/projected/54a4d839-4622-4876-8c21-42cf2a2bf6d4-kube-api-access-zv8jb\") pod \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.920696 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-client-ca\") pod \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\" (UID: \"54a4d839-4622-4876-8c21-42cf2a2bf6d4\") " Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.921306 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-config" (OuterVolumeSpecName: "config") pod "54a4d839-4622-4876-8c21-42cf2a2bf6d4" (UID: "54a4d839-4622-4876-8c21-42cf2a2bf6d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.921455 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "54a4d839-4622-4876-8c21-42cf2a2bf6d4" (UID: "54a4d839-4622-4876-8c21-42cf2a2bf6d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.926668 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54a4d839-4622-4876-8c21-42cf2a2bf6d4-kube-api-access-zv8jb" (OuterVolumeSpecName: "kube-api-access-zv8jb") pod "54a4d839-4622-4876-8c21-42cf2a2bf6d4" (UID: "54a4d839-4622-4876-8c21-42cf2a2bf6d4"). InnerVolumeSpecName "kube-api-access-zv8jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.926690 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a4d839-4622-4876-8c21-42cf2a2bf6d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54a4d839-4622-4876-8c21-42cf2a2bf6d4" (UID: "54a4d839-4622-4876-8c21-42cf2a2bf6d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:43 crc kubenswrapper[4990]: I1205 01:13:43.940636 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d968ad55-8f58-4ac4-84dd-af086fff69b1" path="/var/lib/kubelet/pods/d968ad55-8f58-4ac4-84dd-af086fff69b1/volumes" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022563 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-config\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022631 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-proxy-ca-bundles\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022707 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9sz6\" (UniqueName: \"kubernetes.io/projected/6dd6628a-9d23-407b-b095-70f29d5ac082-kube-api-access-f9sz6\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022763 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-config\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022818 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-client-ca\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022843 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022894 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgdvx\" (UniqueName: \"kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022923 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-client-ca\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.022971 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.023007 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4d839-4622-4876-8c21-42cf2a2bf6d4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.023046 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.023056 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv8jb\" (UniqueName: \"kubernetes.io/projected/54a4d839-4622-4876-8c21-42cf2a2bf6d4-kube-api-access-zv8jb\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.023064 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54a4d839-4622-4876-8c21-42cf2a2bf6d4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123740 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " 
pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123780 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-client-ca\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123808 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-config\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123836 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-proxy-ca-bundles\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123871 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9sz6\" (UniqueName: \"kubernetes.io/projected/6dd6628a-9d23-407b-b095-70f29d5ac082-kube-api-access-f9sz6\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123907 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-config\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123943 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-client-ca\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.123977 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.124006 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgdvx\" (UniqueName: \"kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.125156 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-client-ca\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.125308 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-config\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.125500 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-proxy-ca-bundles\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.125741 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-config\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.126471 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-client-ca\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.127835 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.129226 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert\") pod \"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.140949 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgdvx\" (UniqueName: \"kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx\") pod \"controller-manager-6c9c949b98-fq9m7\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.148799 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9sz6\" (UniqueName: \"kubernetes.io/projected/6dd6628a-9d23-407b-b095-70f29d5ac082-kube-api-access-f9sz6\") pod 
\"route-controller-manager-b696d7b86-m9hcz\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.185862 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.201612 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.329673 4990 generic.go:334] "Generic (PLEG): container finished" podID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" containerID="bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085" exitCode=0 Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.329888 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" event={"ID":"54a4d839-4622-4876-8c21-42cf2a2bf6d4","Type":"ContainerDied","Data":"bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085"} Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.330091 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" event={"ID":"54a4d839-4622-4876-8c21-42cf2a2bf6d4","Type":"ContainerDied","Data":"97724c16af380f644d39024327ffb6c92afeff0f1061d45123a538646d9d7a1b"} Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.330120 4990 scope.go:117] "RemoveContainer" containerID="bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.329980 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.367169 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg"] Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.370876 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbf5d54d9-2z7qg"] Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.381646 4990 scope.go:117] "RemoveContainer" containerID="bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085" Dec 05 01:13:44 crc kubenswrapper[4990]: E1205 01:13:44.382079 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085\": container with ID starting with bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085 not found: ID does not exist" containerID="bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.382107 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085"} err="failed to get container status \"bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085\": rpc error: code = NotFound desc = could not find container \"bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085\": container with ID starting with bf4bb104c5adcffc97e05d0486178590713e5022c0413ad682d4a31bb1c55085 not found: ID does not exist" Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.429328 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz"] Dec 05 01:13:44 crc kubenswrapper[4990]: W1205 01:13:44.436671 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd6628a_9d23_407b_b095_70f29d5ac082.slice/crio-52d454b7ca06d7fcc989b2c32098060fba88edaf344e1d4de72b37121b8469c7 WatchSource:0}: Error finding container 52d454b7ca06d7fcc989b2c32098060fba88edaf344e1d4de72b37121b8469c7: Status 404 returned error can't find the container with id 52d454b7ca06d7fcc989b2c32098060fba88edaf344e1d4de72b37121b8469c7 Dec 05 01:13:44 crc kubenswrapper[4990]: I1205 01:13:44.438924 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.337444 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" event={"ID":"84e4d91c-0db2-4507-9398-2fe3fb16b045","Type":"ContainerStarted","Data":"9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682"} Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.338845 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.338899 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" 
event={"ID":"84e4d91c-0db2-4507-9398-2fe3fb16b045","Type":"ContainerStarted","Data":"e5fca2c32fbb58ac59a26109f7eb917945eebb5552a1a9e2976570be94a8d301"} Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.340316 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" event={"ID":"6dd6628a-9d23-407b-b095-70f29d5ac082","Type":"ContainerStarted","Data":"6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7"} Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.340374 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" event={"ID":"6dd6628a-9d23-407b-b095-70f29d5ac082","Type":"ContainerStarted","Data":"52d454b7ca06d7fcc989b2c32098060fba88edaf344e1d4de72b37121b8469c7"} Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.341317 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.344056 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.346544 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.359163 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" podStartSLOduration=4.359143154 podStartE2EDuration="4.359143154s" podCreationTimestamp="2025-12-05 01:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:13:45.355441468 +0000 UTC m=+323.731656849" watchObservedRunningTime="2025-12-05 01:13:45.359143154 +0000 UTC m=+323.735358515" Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.370769 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" podStartSLOduration=4.370749687 podStartE2EDuration="4.370749687s" podCreationTimestamp="2025-12-05 01:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:13:45.370567611 +0000 UTC m=+323.746782982" watchObservedRunningTime="2025-12-05 01:13:45.370749687 +0000 UTC m=+323.746965058" Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.937386 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" path="/var/lib/kubelet/pods/54a4d839-4622-4876-8c21-42cf2a2bf6d4/volumes" Dec 05 01:13:58 crc kubenswrapper[4990]: I1205 01:13:58.457027 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 01:13:58 crc kubenswrapper[4990]: I1205 01:13:58.458452 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" podUID="84e4d91c-0db2-4507-9398-2fe3fb16b045" containerName="controller-manager" containerID="cri-o://9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682" gracePeriod=30 Dec 05 01:13:58 crc 
Dec 05 01:13:45 crc kubenswrapper[4990]: I1205 01:13:45.937386 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54a4d839-4622-4876-8c21-42cf2a2bf6d4" path="/var/lib/kubelet/pods/54a4d839-4622-4876-8c21-42cf2a2bf6d4/volumes" Dec 05 01:13:58 crc kubenswrapper[4990]: I1205 01:13:58.457027 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 01:13:58 crc kubenswrapper[4990]: I1205 01:13:58.458452 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" podUID="84e4d91c-0db2-4507-9398-2fe3fb16b045" containerName="controller-manager" containerID="cri-o://9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682" gracePeriod=30 Dec 05 01:13:58 crc kubenswrapper[4990]: I1205 01:13:58.519240 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz"] Dec 05 01:13:58 crc kubenswrapper[4990]: I1205 01:13:58.519438 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" podUID="6dd6628a-9d23-407b-b095-70f29d5ac082" containerName="route-controller-manager" containerID="cri-o://6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7" gracePeriod=30
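gracePeriod=30 in the two "Killing container with a grace period" records is the pods' terminationGracePeriodSeconds: on a DELETE the runtime first delivers SIGTERM and escalates to SIGKILL only if the container outlives the grace window (both controller-managers exit within a second, as the ContainerDied events below show). A standalone sketch of the same escalation for a local process; CRI-O signals the container's init process through the runtime rather than an exec.Cmd, so this is an analogy, not the runtime's code:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace asks a process to exit with SIGTERM and escalates to
// SIGKILL if it is still alive after the grace period, mirroring how a
// container runtime honors gracePeriod= when stopping a container.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request first

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited on its own within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period exhausted: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// sleep handles SIGTERM, so this returns almost immediately.
	fmt.Println(killWithGrace(cmd, 30*time.Second))
}
```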
Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.044248 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.050228 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223318 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-client-ca\") pod \"6dd6628a-9d23-407b-b095-70f29d5ac082\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223371 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert\") pod \"6dd6628a-9d23-407b-b095-70f29d5ac082\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223408 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9sz6\" (UniqueName: \"kubernetes.io/projected/6dd6628a-9d23-407b-b095-70f29d5ac082-kube-api-access-f9sz6\") pod \"6dd6628a-9d23-407b-b095-70f29d5ac082\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223427 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert\") pod \"84e4d91c-0db2-4507-9398-2fe3fb16b045\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223444 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-config\") pod \"84e4d91c-0db2-4507-9398-2fe3fb16b045\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223506 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgdvx\" (UniqueName: \"kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx\") pod \"84e4d91c-0db2-4507-9398-2fe3fb16b045\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223531 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-client-ca\") pod \"84e4d91c-0db2-4507-9398-2fe3fb16b045\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223554 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-config\") pod \"6dd6628a-9d23-407b-b095-70f29d5ac082\" (UID: \"6dd6628a-9d23-407b-b095-70f29d5ac082\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.223602 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-proxy-ca-bundles\") pod \"84e4d91c-0db2-4507-9398-2fe3fb16b045\" (UID: \"84e4d91c-0db2-4507-9398-2fe3fb16b045\") " Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.224376 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-config" (OuterVolumeSpecName: "config") pod "6dd6628a-9d23-407b-b095-70f29d5ac082" (UID: "6dd6628a-9d23-407b-b095-70f29d5ac082"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.224429 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-client-ca" (OuterVolumeSpecName: "client-ca") pod "6dd6628a-9d23-407b-b095-70f29d5ac082" (UID: "6dd6628a-9d23-407b-b095-70f29d5ac082"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.224670 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-client-ca" (OuterVolumeSpecName: "client-ca") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.224748 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.224845 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-config" (OuterVolumeSpecName: "config") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228575 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd6628a-9d23-407b-b095-70f29d5ac082-kube-api-access-f9sz6" (OuterVolumeSpecName: "kube-api-access-f9sz6") pod "6dd6628a-9d23-407b-b095-70f29d5ac082" (UID: "6dd6628a-9d23-407b-b095-70f29d5ac082"). InnerVolumeSpecName "kube-api-access-f9sz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228610 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228699 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6dd6628a-9d23-407b-b095-70f29d5ac082" (UID: "6dd6628a-9d23-407b-b095-70f29d5ac082"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228880 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx" (OuterVolumeSpecName: "kube-api-access-vgdvx") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "kube-api-access-vgdvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
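The burst of UnmountVolume/TearDown records above is the volume manager's reconciler: it continually compares the desired state of world (volumes that pods scheduled to this node still need) against the actual state (volumes currently mounted). Deleting the two pods removed their configmap, secret, and projected volumes from the desired set, so each mount is torn down and then reported as "Volume detached" below. A toy version of that set comparison; the names and map shapes are illustrative, not the kubelet's real data structures:

```go
package main

import "fmt"

// reconcile compares desired against actual mounts and returns what to
// mount and what to tear down, the core decision behind the
// MountVolume/UnmountVolume lines in this log.
func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v) // -> MountVolume.SetUp
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v) // -> UnmountVolume.TearDown, then "Volume detached"
		}
	}
	return mount, unmount
}

func main() {
	// After the DELETEs above, only the replacement pods' volumes are desired.
	desired := map[string]bool{"newpod/config": true, "newpod/serving-cert": true}
	actual := map[string]bool{"6dd6628a/config": true, "6dd6628a/serving-cert": true}
	m, u := reconcile(desired, actual)
	fmt.Println("mount:", m, "unmount:", u)
}
```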
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228610 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228699 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6dd6628a-9d23-407b-b095-70f29d5ac082" (UID: "6dd6628a-9d23-407b-b095-70f29d5ac082"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.228880 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx" (OuterVolumeSpecName: "kube-api-access-vgdvx") pod "84e4d91c-0db2-4507-9398-2fe3fb16b045" (UID: "84e4d91c-0db2-4507-9398-2fe3fb16b045"). InnerVolumeSpecName "kube-api-access-vgdvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324630 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9sz6\" (UniqueName: \"kubernetes.io/projected/6dd6628a-9d23-407b-b095-70f29d5ac082-kube-api-access-f9sz6\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324668 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e4d91c-0db2-4507-9398-2fe3fb16b045-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324684 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324697 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgdvx\" (UniqueName: \"kubernetes.io/projected/84e4d91c-0db2-4507-9398-2fe3fb16b045-kube-api-access-vgdvx\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324707 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324718 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324728 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84e4d91c-0db2-4507-9398-2fe3fb16b045-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.324739 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dd6628a-9d23-407b-b095-70f29d5ac082-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 
01:13:59.324749 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd6628a-9d23-407b-b095-70f29d5ac082-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.442812 4990 generic.go:334] "Generic (PLEG): container finished" podID="84e4d91c-0db2-4507-9398-2fe3fb16b045" containerID="9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682" exitCode=0 Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.442965 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.442977 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" event={"ID":"84e4d91c-0db2-4507-9398-2fe3fb16b045","Type":"ContainerDied","Data":"9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682"} Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.443530 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c9c949b98-fq9m7" event={"ID":"84e4d91c-0db2-4507-9398-2fe3fb16b045","Type":"ContainerDied","Data":"e5fca2c32fbb58ac59a26109f7eb917945eebb5552a1a9e2976570be94a8d301"} Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.443575 4990 scope.go:117] "RemoveContainer" containerID="9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.446155 4990 generic.go:334] "Generic (PLEG): container finished" podID="6dd6628a-9d23-407b-b095-70f29d5ac082" containerID="6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7" exitCode=0 Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.446205 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" event={"ID":"6dd6628a-9d23-407b-b095-70f29d5ac082","Type":"ContainerDied","Data":"6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7"} Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.446232 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" event={"ID":"6dd6628a-9d23-407b-b095-70f29d5ac082","Type":"ContainerDied","Data":"52d454b7ca06d7fcc989b2c32098060fba88edaf344e1d4de72b37121b8469c7"} Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.446254 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.468733 4990 scope.go:117] "RemoveContainer" containerID="9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682" Dec 05 01:13:59 crc kubenswrapper[4990]: E1205 01:13:59.469300 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682\": container with ID starting with 9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682 not found: ID does not exist" containerID="9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.469346 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682"} err="failed to get container status \"9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682\": rpc error: code = NotFound desc = could not find container \"9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682\": container with ID starting with 9e9aef56ac03dca29f22560119f951390bed33d5806bda201e41c34aa26f6682 not found: ID does not exist" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.469383 4990 scope.go:117] "RemoveContainer" containerID="6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.489853 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz"] Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.497396 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b696d7b86-m9hcz"] Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.500369 4990 scope.go:117] "RemoveContainer" containerID="6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7" Dec 05 01:13:59 crc kubenswrapper[4990]: E1205 01:13:59.501559 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7\": container with ID starting with 6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7 not found: ID does not exist" containerID="6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.501622 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7"} err="failed to get container status \"6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7\": rpc error: code = NotFound desc = could not find container \"6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7\": container with ID starting with 6992190e537ba8f5ed131b000d82e7cfe74b605673b8afd71e0f891046a64ef7 not found: ID does not exist" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.505680 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.512179 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c9c949b98-fq9m7"] Dec 05 
01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.863793 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56"] Dec 05 01:13:59 crc kubenswrapper[4990]: E1205 01:13:59.864281 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e4d91c-0db2-4507-9398-2fe3fb16b045" containerName="controller-manager" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.864313 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e4d91c-0db2-4507-9398-2fe3fb16b045" containerName="controller-manager" Dec 05 01:13:59 crc kubenswrapper[4990]: E1205 01:13:59.864342 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd6628a-9d23-407b-b095-70f29d5ac082" containerName="route-controller-manager" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.864356 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd6628a-9d23-407b-b095-70f29d5ac082" containerName="route-controller-manager" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.864633 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e4d91c-0db2-4507-9398-2fe3fb16b045" containerName="controller-manager" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.864674 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd6628a-9d23-407b-b095-70f29d5ac082" containerName="route-controller-manager" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.865276 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.867338 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.869872 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6645475bc5-d9brr"] Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.870883 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.872297 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.872521 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.872673 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.873083 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.873289 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.880911 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.881128 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.881589 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.885410 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6645475bc5-d9brr"] Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.886824 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.887072 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.887317 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.893117 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56"] Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.894110 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.938940 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd6628a-9d23-407b-b095-70f29d5ac082" path="/var/lib/kubelet/pods/6dd6628a-9d23-407b-b095-70f29d5ac082/volumes" Dec 05 01:13:59 crc kubenswrapper[4990]: I1205 01:13:59.940071 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e4d91c-0db2-4507-9398-2fe3fb16b045" path="/var/lib/kubelet/pods/84e4d91c-0db2-4507-9398-2fe3fb16b045/volumes" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034013 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2n2j\" (UniqueName: \"kubernetes.io/projected/c641f7a7-144a-48f9-a955-6b87f29bd91b-kube-api-access-b2n2j\") pod 
\"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034134 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-client-ca\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034183 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e038f5-34d9-4a34-8317-47ebc0e35b66-serving-cert\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034217 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c641f7a7-144a-48f9-a955-6b87f29bd91b-serving-cert\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034465 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-proxy-ca-bundles\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034589 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-config\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034661 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-config\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034707 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6ht\" (UniqueName: \"kubernetes.io/projected/03e038f5-34d9-4a34-8317-47ebc0e35b66-kube-api-access-nq6ht\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.034901 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-client-ca\") pod 
\"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.136817 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e038f5-34d9-4a34-8317-47ebc0e35b66-serving-cert\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.136876 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c641f7a7-144a-48f9-a955-6b87f29bd91b-serving-cert\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.136922 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-proxy-ca-bundles\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.136949 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-config\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.137742 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-config\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.137869 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6ht\" (UniqueName: \"kubernetes.io/projected/03e038f5-34d9-4a34-8317-47ebc0e35b66-kube-api-access-nq6ht\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.137983 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-client-ca\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.138079 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2n2j\" (UniqueName: \"kubernetes.io/projected/c641f7a7-144a-48f9-a955-6b87f29bd91b-kube-api-access-b2n2j\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.138202 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-client-ca\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.138851 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-config\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.139272 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-proxy-ca-bundles\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.139748 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-config\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.139847 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-client-ca\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.139895 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-client-ca\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.144215 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c641f7a7-144a-48f9-a955-6b87f29bd91b-serving-cert\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.144972 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e038f5-34d9-4a34-8317-47ebc0e35b66-serving-cert\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.161301 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6ht\" 
(UniqueName: \"kubernetes.io/projected/03e038f5-34d9-4a34-8317-47ebc0e35b66-kube-api-access-nq6ht\") pod \"controller-manager-6645475bc5-d9brr\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.170156 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2n2j\" (UniqueName: \"kubernetes.io/projected/c641f7a7-144a-48f9-a955-6b87f29bd91b-kube-api-access-b2n2j\") pod \"route-controller-manager-7cfdb88d88-f2b56\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.201144 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.214854 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.497268 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6645475bc5-d9brr"] Dec 05 01:14:00 crc kubenswrapper[4990]: W1205 01:14:00.501353 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e038f5_34d9_4a34_8317_47ebc0e35b66.slice/crio-7551005d5562600f400330d03d113845355d8e4e6a24ec706ee01662d9ce5850 WatchSource:0}: Error finding container 7551005d5562600f400330d03d113845355d8e4e6a24ec706ee01662d9ce5850: Status 404 returned error can't find the container with id 7551005d5562600f400330d03d113845355d8e4e6a24ec706ee01662d9ce5850 Dec 05 01:14:00 crc kubenswrapper[4990]: I1205 01:14:00.664224 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56"] Dec 05 01:14:00 crc kubenswrapper[4990]: W1205 01:14:00.669834 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc641f7a7_144a_48f9_a955_6b87f29bd91b.slice/crio-88c279a0835f1720e7a5234f25e466f888c723b51948e866d4e2eb31da7a9a22 WatchSource:0}: Error finding container 88c279a0835f1720e7a5234f25e466f888c723b51948e866d4e2eb31da7a9a22: Status 404 returned error can't find the container with id 88c279a0835f1720e7a5234f25e466f888c723b51948e866d4e2eb31da7a9a22 Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.467819 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" event={"ID":"03e038f5-34d9-4a34-8317-47ebc0e35b66","Type":"ContainerStarted","Data":"4b4b881363be78a692a8e08eb704c0ee9ef605a7f09242e23899b760d6db851e"} Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.468125 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.468136 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" event={"ID":"03e038f5-34d9-4a34-8317-47ebc0e35b66","Type":"ContainerStarted","Data":"7551005d5562600f400330d03d113845355d8e4e6a24ec706ee01662d9ce5850"} Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 
01:14:01.469898 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" event={"ID":"c641f7a7-144a-48f9-a955-6b87f29bd91b","Type":"ContainerStarted","Data":"cc1e0460133143e046676c6a5a67b5a6946ea986b0a52786ea336184427fdfdb"} Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.469965 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" event={"ID":"c641f7a7-144a-48f9-a955-6b87f29bd91b","Type":"ContainerStarted","Data":"88c279a0835f1720e7a5234f25e466f888c723b51948e866d4e2eb31da7a9a22"} Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.470196 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.476649 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.483887 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.493713 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" podStartSLOduration=3.493681398 podStartE2EDuration="3.493681398s" podCreationTimestamp="2025-12-05 01:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:14:01.486675197 +0000 UTC m=+339.862890558" watchObservedRunningTime="2025-12-05 01:14:01.493681398 +0000 UTC m=+339.869896789" Dec 05 01:14:01 crc kubenswrapper[4990]: I1205 01:14:01.530352 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" podStartSLOduration=3.5303383889999997 podStartE2EDuration="3.530338389s" podCreationTimestamp="2025-12-05 01:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:14:01.52828004 +0000 UTC m=+339.904495401" watchObservedRunningTime="2025-12-05 01:14:01.530338389 +0000 UTC m=+339.906553750" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.229282 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6645475bc5-d9brr"] Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.230613 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" podUID="03e038f5-34d9-4a34-8317-47ebc0e35b66" containerName="controller-manager" containerID="cri-o://4b4b881363be78a692a8e08eb704c0ee9ef605a7f09242e23899b760d6db851e" gracePeriod=30 Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.315967 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56"] Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.316198 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" 
podUID="c641f7a7-144a-48f9-a955-6b87f29bd91b" containerName="route-controller-manager" containerID="cri-o://cc1e0460133143e046676c6a5a67b5a6946ea986b0a52786ea336184427fdfdb" gracePeriod=30 Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.580813 4990 generic.go:334] "Generic (PLEG): container finished" podID="03e038f5-34d9-4a34-8317-47ebc0e35b66" containerID="4b4b881363be78a692a8e08eb704c0ee9ef605a7f09242e23899b760d6db851e" exitCode=0 Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.580891 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" event={"ID":"03e038f5-34d9-4a34-8317-47ebc0e35b66","Type":"ContainerDied","Data":"4b4b881363be78a692a8e08eb704c0ee9ef605a7f09242e23899b760d6db851e"} Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.582820 4990 generic.go:334] "Generic (PLEG): container finished" podID="c641f7a7-144a-48f9-a955-6b87f29bd91b" containerID="cc1e0460133143e046676c6a5a67b5a6946ea986b0a52786ea336184427fdfdb" exitCode=0 Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.582869 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" event={"ID":"c641f7a7-144a-48f9-a955-6b87f29bd91b","Type":"ContainerDied","Data":"cc1e0460133143e046676c6a5a67b5a6946ea986b0a52786ea336184427fdfdb"} Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.740427 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.744566 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.900792 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-proxy-ca-bundles\") pod \"03e038f5-34d9-4a34-8317-47ebc0e35b66\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.900839 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c641f7a7-144a-48f9-a955-6b87f29bd91b-serving-cert\") pod \"c641f7a7-144a-48f9-a955-6b87f29bd91b\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.900867 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-client-ca\") pod \"03e038f5-34d9-4a34-8317-47ebc0e35b66\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.900904 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e038f5-34d9-4a34-8317-47ebc0e35b66-serving-cert\") pod \"03e038f5-34d9-4a34-8317-47ebc0e35b66\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.900934 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2n2j\" (UniqueName: \"kubernetes.io/projected/c641f7a7-144a-48f9-a955-6b87f29bd91b-kube-api-access-b2n2j\") pod \"c641f7a7-144a-48f9-a955-6b87f29bd91b\" (UID: 
\"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.900969 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-config\") pod \"03e038f5-34d9-4a34-8317-47ebc0e35b66\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.901020 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-client-ca\") pod \"c641f7a7-144a-48f9-a955-6b87f29bd91b\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.901045 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6ht\" (UniqueName: \"kubernetes.io/projected/03e038f5-34d9-4a34-8317-47ebc0e35b66-kube-api-access-nq6ht\") pod \"03e038f5-34d9-4a34-8317-47ebc0e35b66\" (UID: \"03e038f5-34d9-4a34-8317-47ebc0e35b66\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.901066 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-config\") pod \"c641f7a7-144a-48f9-a955-6b87f29bd91b\" (UID: \"c641f7a7-144a-48f9-a955-6b87f29bd91b\") " Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.901897 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "03e038f5-34d9-4a34-8317-47ebc0e35b66" (UID: "03e038f5-34d9-4a34-8317-47ebc0e35b66"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.901914 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-config" (OuterVolumeSpecName: "config") pod "c641f7a7-144a-48f9-a955-6b87f29bd91b" (UID: "c641f7a7-144a-48f9-a955-6b87f29bd91b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.902025 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c641f7a7-144a-48f9-a955-6b87f29bd91b" (UID: "c641f7a7-144a-48f9-a955-6b87f29bd91b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.902044 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-config" (OuterVolumeSpecName: "config") pod "03e038f5-34d9-4a34-8317-47ebc0e35b66" (UID: "03e038f5-34d9-4a34-8317-47ebc0e35b66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.902676 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-client-ca" (OuterVolumeSpecName: "client-ca") pod "03e038f5-34d9-4a34-8317-47ebc0e35b66" (UID: "03e038f5-34d9-4a34-8317-47ebc0e35b66"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.906281 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c641f7a7-144a-48f9-a955-6b87f29bd91b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c641f7a7-144a-48f9-a955-6b87f29bd91b" (UID: "c641f7a7-144a-48f9-a955-6b87f29bd91b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.907025 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e038f5-34d9-4a34-8317-47ebc0e35b66-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03e038f5-34d9-4a34-8317-47ebc0e35b66" (UID: "03e038f5-34d9-4a34-8317-47ebc0e35b66"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.907050 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e038f5-34d9-4a34-8317-47ebc0e35b66-kube-api-access-nq6ht" (OuterVolumeSpecName: "kube-api-access-nq6ht") pod "03e038f5-34d9-4a34-8317-47ebc0e35b66" (UID: "03e038f5-34d9-4a34-8317-47ebc0e35b66"). InnerVolumeSpecName "kube-api-access-nq6ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:18 crc kubenswrapper[4990]: I1205 01:14:18.907813 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c641f7a7-144a-48f9-a955-6b87f29bd91b-kube-api-access-b2n2j" (OuterVolumeSpecName: "kube-api-access-b2n2j") pod "c641f7a7-144a-48f9-a955-6b87f29bd91b" (UID: "c641f7a7-144a-48f9-a955-6b87f29bd91b"). InnerVolumeSpecName "kube-api-access-b2n2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002257 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03e038f5-34d9-4a34-8317-47ebc0e35b66-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002330 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2n2j\" (UniqueName: \"kubernetes.io/projected/c641f7a7-144a-48f9-a955-6b87f29bd91b-kube-api-access-b2n2j\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002347 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002361 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002373 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6ht\" (UniqueName: \"kubernetes.io/projected/03e038f5-34d9-4a34-8317-47ebc0e35b66-kube-api-access-nq6ht\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002384 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c641f7a7-144a-48f9-a955-6b87f29bd91b-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002395 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002406 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c641f7a7-144a-48f9-a955-6b87f29bd91b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.002417 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03e038f5-34d9-4a34-8317-47ebc0e35b66-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.591872 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" event={"ID":"c641f7a7-144a-48f9-a955-6b87f29bd91b","Type":"ContainerDied","Data":"88c279a0835f1720e7a5234f25e466f888c723b51948e866d4e2eb31da7a9a22"} Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.592248 4990 scope.go:117] "RemoveContainer" containerID="cc1e0460133143e046676c6a5a67b5a6946ea986b0a52786ea336184427fdfdb" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.591935 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.595416 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" event={"ID":"03e038f5-34d9-4a34-8317-47ebc0e35b66","Type":"ContainerDied","Data":"7551005d5562600f400330d03d113845355d8e4e6a24ec706ee01662d9ce5850"} Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.595510 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6645475bc5-d9brr" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.617119 4990 scope.go:117] "RemoveContainer" containerID="4b4b881363be78a692a8e08eb704c0ee9ef605a7f09242e23899b760d6db851e" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.630121 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.633729 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfdb88d88-f2b56"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.643502 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6645475bc5-d9brr"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.646936 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6645475bc5-d9brr"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.879445 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bcc56b59c-t28lp"] Dec 05 01:14:19 crc kubenswrapper[4990]: E1205 01:14:19.879732 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c641f7a7-144a-48f9-a955-6b87f29bd91b" containerName="route-controller-manager" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.879744 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c641f7a7-144a-48f9-a955-6b87f29bd91b" containerName="route-controller-manager" Dec 05 01:14:19 crc kubenswrapper[4990]: E1205 01:14:19.879755 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e038f5-34d9-4a34-8317-47ebc0e35b66" containerName="controller-manager" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.879763 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e038f5-34d9-4a34-8317-47ebc0e35b66" containerName="controller-manager" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.879856 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c641f7a7-144a-48f9-a955-6b87f29bd91b" containerName="route-controller-manager" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.879871 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e038f5-34d9-4a34-8317-47ebc0e35b66" containerName="controller-manager" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.880279 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.883009 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.883682 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.884557 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.884664 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.884787 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.886343 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.889979 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.890890 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.894864 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.895029 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.895052 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.895360 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.895609 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.895830 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.901196 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.902265 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bcc56b59c-t28lp"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.909720 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6"] Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.923557 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2r5g\" (UniqueName: 
\"kubernetes.io/projected/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-kube-api-access-w2r5g\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.923647 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-client-ca\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.923696 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-config\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.923769 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-proxy-ca-bundles\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.923887 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-serving-cert\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.941441 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e038f5-34d9-4a34-8317-47ebc0e35b66" path="/var/lib/kubelet/pods/03e038f5-34d9-4a34-8317-47ebc0e35b66/volumes" Dec 05 01:14:19 crc kubenswrapper[4990]: I1205 01:14:19.942323 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c641f7a7-144a-48f9-a955-6b87f29bd91b" path="/var/lib/kubelet/pods/c641f7a7-144a-48f9-a955-6b87f29bd91b/volumes" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.024601 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-client-ca\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.024659 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-config\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.024716 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-client-ca\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.024763 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srwsk\" (UniqueName: \"kubernetes.io/projected/e8ccb848-07ca-4643-b860-506a75959c7d-kube-api-access-srwsk\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.024797 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-proxy-ca-bundles\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.024910 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-serving-cert\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.025013 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-config\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.025113 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccb848-07ca-4643-b860-506a75959c7d-serving-cert\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.025182 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2r5g\" (UniqueName: \"kubernetes.io/projected/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-kube-api-access-w2r5g\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.025728 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-client-ca\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.026629 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-config\") pod 
\"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.027119 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-proxy-ca-bundles\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.031218 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-serving-cert\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.043988 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2r5g\" (UniqueName: \"kubernetes.io/projected/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-kube-api-access-w2r5g\") pod \"controller-manager-bcc56b59c-t28lp\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.127444 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-client-ca\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.127580 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srwsk\" (UniqueName: \"kubernetes.io/projected/e8ccb848-07ca-4643-b860-506a75959c7d-kube-api-access-srwsk\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.127687 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-config\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.127727 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccb848-07ca-4643-b860-506a75959c7d-serving-cert\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.128338 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-client-ca\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " 
pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.129812 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-config\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.131717 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccb848-07ca-4643-b860-506a75959c7d-serving-cert\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.162779 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srwsk\" (UniqueName: \"kubernetes.io/projected/e8ccb848-07ca-4643-b860-506a75959c7d-kube-api-access-srwsk\") pod \"route-controller-manager-7855f88fcf-dgsq6\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.213141 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.227515 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.553262 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6"] Dec 05 01:14:20 crc kubenswrapper[4990]: W1205 01:14:20.563999 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8ccb848_07ca_4643_b860_506a75959c7d.slice/crio-944b6fa1e4fb015802cd381197eaf7a09668048db3c99eb883993b966f27ef40 WatchSource:0}: Error finding container 944b6fa1e4fb015802cd381197eaf7a09668048db3c99eb883993b966f27ef40: Status 404 returned error can't find the container with id 944b6fa1e4fb015802cd381197eaf7a09668048db3c99eb883993b966f27ef40 Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.604256 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" event={"ID":"e8ccb848-07ca-4643-b860-506a75959c7d","Type":"ContainerStarted","Data":"944b6fa1e4fb015802cd381197eaf7a09668048db3c99eb883993b966f27ef40"} Dec 05 01:14:20 crc kubenswrapper[4990]: I1205 01:14:20.702695 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bcc56b59c-t28lp"] Dec 05 01:14:20 crc kubenswrapper[4990]: W1205 01:14:20.710545 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4de58ac7_3f5f_430b_ac5f_99d7d62a02b2.slice/crio-32a1ceccfb5367a7709dd3adb458d38d5257ad5c8e832d7bfdc2617ebbec4696 WatchSource:0}: Error finding container 32a1ceccfb5367a7709dd3adb458d38d5257ad5c8e832d7bfdc2617ebbec4696: Status 404 returned error can't find the container with 
id 32a1ceccfb5367a7709dd3adb458d38d5257ad5c8e832d7bfdc2617ebbec4696 Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.621643 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" event={"ID":"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2","Type":"ContainerStarted","Data":"5b36890f2a828f7f5da5ce4131b0543751c7807b29422d993d74a9556d52eb9b"} Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.622379 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" event={"ID":"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2","Type":"ContainerStarted","Data":"32a1ceccfb5367a7709dd3adb458d38d5257ad5c8e832d7bfdc2617ebbec4696"} Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.622528 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.627135 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" event={"ID":"e8ccb848-07ca-4643-b860-506a75959c7d","Type":"ContainerStarted","Data":"d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c"} Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.627437 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.628322 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.634744 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.652874 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" podStartSLOduration=3.6528507059999997 podStartE2EDuration="3.652850706s" podCreationTimestamp="2025-12-05 01:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:14:21.648495451 +0000 UTC m=+360.024710812" watchObservedRunningTime="2025-12-05 01:14:21.652850706 +0000 UTC m=+360.029066107" Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.685977 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" podStartSLOduration=3.685958495 podStartE2EDuration="3.685958495s" podCreationTimestamp="2025-12-05 01:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:14:21.685800821 +0000 UTC m=+360.062016192" watchObservedRunningTime="2025-12-05 01:14:21.685958495 +0000 UTC m=+360.062173866" Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 01:14:21.824053 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:14:21 crc kubenswrapper[4990]: I1205 
01:14:21.824130 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:14:33 crc kubenswrapper[4990]: I1205 01:14:33.940048 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lp5lw"] Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.232438 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bcc56b59c-t28lp"] Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.233109 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" podUID="4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" containerName="controller-manager" containerID="cri-o://5b36890f2a828f7f5da5ce4131b0543751c7807b29422d993d74a9556d52eb9b" gracePeriod=30 Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.243187 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6"] Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.243549 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" podUID="e8ccb848-07ca-4643-b860-506a75959c7d" containerName="route-controller-manager" containerID="cri-o://d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c" gracePeriod=30 Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.714779 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.723080 4990 generic.go:334] "Generic (PLEG): container finished" podID="4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" containerID="5b36890f2a828f7f5da5ce4131b0543751c7807b29422d993d74a9556d52eb9b" exitCode=0 Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.723159 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" event={"ID":"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2","Type":"ContainerDied","Data":"5b36890f2a828f7f5da5ce4131b0543751c7807b29422d993d74a9556d52eb9b"} Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.724639 4990 generic.go:334] "Generic (PLEG): container finished" podID="e8ccb848-07ca-4643-b860-506a75959c7d" containerID="d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c" exitCode=0 Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.724684 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" event={"ID":"e8ccb848-07ca-4643-b860-506a75959c7d","Type":"ContainerDied","Data":"d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c"} Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.724716 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" event={"ID":"e8ccb848-07ca-4643-b860-506a75959c7d","Type":"ContainerDied","Data":"944b6fa1e4fb015802cd381197eaf7a09668048db3c99eb883993b966f27ef40"} Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.724654 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.724758 4990 scope.go:117] "RemoveContainer" containerID="d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.743611 4990 scope.go:117] "RemoveContainer" containerID="d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c" Dec 05 01:14:38 crc kubenswrapper[4990]: E1205 01:14:38.744780 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c\": container with ID starting with d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c not found: ID does not exist" containerID="d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.744827 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c"} err="failed to get container status \"d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c\": rpc error: code = NotFound desc = could not find container \"d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c\": container with ID starting with d82d1308ef6fb20c997b37f755b8e2b2d5fe40e5ce4f6ddc7a2eee295852dd7c not found: ID does not exist" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.791633 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccb848-07ca-4643-b860-506a75959c7d-serving-cert\") 
pod \"e8ccb848-07ca-4643-b860-506a75959c7d\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.791676 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srwsk\" (UniqueName: \"kubernetes.io/projected/e8ccb848-07ca-4643-b860-506a75959c7d-kube-api-access-srwsk\") pod \"e8ccb848-07ca-4643-b860-506a75959c7d\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.791712 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-client-ca\") pod \"e8ccb848-07ca-4643-b860-506a75959c7d\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.791743 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-config\") pod \"e8ccb848-07ca-4643-b860-506a75959c7d\" (UID: \"e8ccb848-07ca-4643-b860-506a75959c7d\") " Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.792523 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-client-ca" (OuterVolumeSpecName: "client-ca") pod "e8ccb848-07ca-4643-b860-506a75959c7d" (UID: "e8ccb848-07ca-4643-b860-506a75959c7d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.792644 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-config" (OuterVolumeSpecName: "config") pod "e8ccb848-07ca-4643-b860-506a75959c7d" (UID: "e8ccb848-07ca-4643-b860-506a75959c7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.798613 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ccb848-07ca-4643-b860-506a75959c7d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e8ccb848-07ca-4643-b860-506a75959c7d" (UID: "e8ccb848-07ca-4643-b860-506a75959c7d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.799114 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ccb848-07ca-4643-b860-506a75959c7d-kube-api-access-srwsk" (OuterVolumeSpecName: "kube-api-access-srwsk") pod "e8ccb848-07ca-4643-b860-506a75959c7d" (UID: "e8ccb848-07ca-4643-b860-506a75959c7d"). InnerVolumeSpecName "kube-api-access-srwsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.892858 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.892911 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8ccb848-07ca-4643-b860-506a75959c7d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.892932 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srwsk\" (UniqueName: \"kubernetes.io/projected/e8ccb848-07ca-4643-b860-506a75959c7d-kube-api-access-srwsk\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:38 crc kubenswrapper[4990]: I1205 01:14:38.892959 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8ccb848-07ca-4643-b860-506a75959c7d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.056145 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.061327 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7855f88fcf-dgsq6"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.248969 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.399151 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-proxy-ca-bundles\") pod \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.399229 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-config\") pod \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.399277 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-serving-cert\") pod \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.399468 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-client-ca\") pod \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.399976 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" (UID: "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.400183 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-client-ca" (OuterVolumeSpecName: "client-ca") pod "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" (UID: "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.400308 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-config" (OuterVolumeSpecName: "config") pod "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" (UID: "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.400399 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2r5g\" (UniqueName: \"kubernetes.io/projected/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-kube-api-access-w2r5g\") pod \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\" (UID: \"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2\") " Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.400909 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.400972 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.400987 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.403065 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" (UID: "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.403116 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-kube-api-access-w2r5g" (OuterVolumeSpecName: "kube-api-access-w2r5g") pod "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" (UID: "4de58ac7-3f5f-430b-ac5f-99d7d62a02b2"). InnerVolumeSpecName "kube-api-access-w2r5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.502578 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.502605 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2r5g\" (UniqueName: \"kubernetes.io/projected/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2-kube-api-access-w2r5g\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.733221 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" event={"ID":"4de58ac7-3f5f-430b-ac5f-99d7d62a02b2","Type":"ContainerDied","Data":"32a1ceccfb5367a7709dd3adb458d38d5257ad5c8e832d7bfdc2617ebbec4696"} Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.733276 4990 scope.go:117] "RemoveContainer" containerID="5b36890f2a828f7f5da5ce4131b0543751c7807b29422d993d74a9556d52eb9b" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.733290 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcc56b59c-t28lp" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.777630 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bcc56b59c-t28lp"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.785522 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bcc56b59c-t28lp"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.897655 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn"] Dec 05 01:14:39 crc kubenswrapper[4990]: E1205 01:14:39.898381 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ccb848-07ca-4643-b860-506a75959c7d" containerName="route-controller-manager" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.898405 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ccb848-07ca-4643-b860-506a75959c7d" containerName="route-controller-manager" Dec 05 01:14:39 crc kubenswrapper[4990]: E1205 01:14:39.898429 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" containerName="controller-manager" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.898443 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" containerName="controller-manager" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.898700 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" containerName="controller-manager" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.898721 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ccb848-07ca-4643-b860-506a75959c7d" containerName="route-controller-manager" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.903001 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.903504 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-758cb8b6df-2nvgz"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.904620 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.904836 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.905307 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.905774 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.905781 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.905895 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908755 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrhj\" (UniqueName: \"kubernetes.io/projected/916299e4-5311-4f1e-b3d2-5debf1583d1d-kube-api-access-vnrhj\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908797 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-proxy-ca-bundles\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908834 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d3b023-842c-4b26-894f-36a286358af4-serving-cert\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908860 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-client-ca\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908888 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-client-ca\") pod 
\"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908928 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916299e4-5311-4f1e-b3d2-5debf1583d1d-serving-cert\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908951 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-config\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.908987 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhm98\" (UniqueName: \"kubernetes.io/projected/d9d3b023-842c-4b26-894f-36a286358af4-kube-api-access-rhm98\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.909015 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-config\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.909535 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.910688 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.910969 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.911203 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.911257 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.911299 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.912581 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.916646 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758cb8b6df-2nvgz"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.919090 4990 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.923209 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn"] Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.950099 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de58ac7-3f5f-430b-ac5f-99d7d62a02b2" path="/var/lib/kubelet/pods/4de58ac7-3f5f-430b-ac5f-99d7d62a02b2/volumes" Dec 05 01:14:39 crc kubenswrapper[4990]: I1205 01:14:39.951098 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ccb848-07ca-4643-b860-506a75959c7d" path="/var/lib/kubelet/pods/e8ccb848-07ca-4643-b860-506a75959c7d/volumes" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.009838 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhm98\" (UniqueName: \"kubernetes.io/projected/d9d3b023-842c-4b26-894f-36a286358af4-kube-api-access-rhm98\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.009891 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-config\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.009978 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrhj\" (UniqueName: \"kubernetes.io/projected/916299e4-5311-4f1e-b3d2-5debf1583d1d-kube-api-access-vnrhj\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.009999 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-proxy-ca-bundles\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.010054 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d3b023-842c-4b26-894f-36a286358af4-serving-cert\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.010084 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-client-ca\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.010121 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-client-ca\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.010363 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916299e4-5311-4f1e-b3d2-5debf1583d1d-serving-cert\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.010395 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-config\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.011360 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-config\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.011561 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-client-ca\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.011921 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-client-ca\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.011952 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-proxy-ca-bundles\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.012541 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-config\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.017458 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916299e4-5311-4f1e-b3d2-5debf1583d1d-serving-cert\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" 
Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.024693 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d3b023-842c-4b26-894f-36a286358af4-serving-cert\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.025150 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrhj\" (UniqueName: \"kubernetes.io/projected/916299e4-5311-4f1e-b3d2-5debf1583d1d-kube-api-access-vnrhj\") pod \"route-controller-manager-67dbbb759-xwqxn\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.025721 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhm98\" (UniqueName: \"kubernetes.io/projected/d9d3b023-842c-4b26-894f-36a286358af4-kube-api-access-rhm98\") pod \"controller-manager-758cb8b6df-2nvgz\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.230588 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.244366 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.649201 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn"] Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.732277 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758cb8b6df-2nvgz"] Dec 05 01:14:40 crc kubenswrapper[4990]: W1205 01:14:40.735256 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9d3b023_842c_4b26_894f_36a286358af4.slice/crio-d8ce581d65944f4adfe34ce42b619ffa965eccd0579882a89ca16328cd693335 WatchSource:0}: Error finding container d8ce581d65944f4adfe34ce42b619ffa965eccd0579882a89ca16328cd693335: Status 404 returned error can't find the container with id d8ce581d65944f4adfe34ce42b619ffa965eccd0579882a89ca16328cd693335 Dec 05 01:14:40 crc kubenswrapper[4990]: I1205 01:14:40.744176 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" event={"ID":"916299e4-5311-4f1e-b3d2-5debf1583d1d","Type":"ContainerStarted","Data":"3ce4b860bceee19868064cf8e9bd5235d0f0c8b29c18219560c9fbba98f04a14"} Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.751113 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" event={"ID":"d9d3b023-842c-4b26-894f-36a286358af4","Type":"ContainerStarted","Data":"1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58"} Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.751526 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:41 
crc kubenswrapper[4990]: I1205 01:14:41.751541 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" event={"ID":"d9d3b023-842c-4b26-894f-36a286358af4","Type":"ContainerStarted","Data":"d8ce581d65944f4adfe34ce42b619ffa965eccd0579882a89ca16328cd693335"} Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.752291 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" event={"ID":"916299e4-5311-4f1e-b3d2-5debf1583d1d","Type":"ContainerStarted","Data":"9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297"} Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.752513 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.757966 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.758189 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.770701 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" podStartSLOduration=3.770682537 podStartE2EDuration="3.770682537s" podCreationTimestamp="2025-12-05 01:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:14:41.768057732 +0000 UTC m=+380.144273103" watchObservedRunningTime="2025-12-05 01:14:41.770682537 +0000 UTC m=+380.146897898" Dec 05 01:14:41 crc kubenswrapper[4990]: I1205 01:14:41.791483 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" podStartSLOduration=3.7914447129999997 podStartE2EDuration="3.791444713s" podCreationTimestamp="2025-12-05 01:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:14:41.787236952 +0000 UTC m=+380.163452333" watchObservedRunningTime="2025-12-05 01:14:41.791444713 +0000 UTC m=+380.167660084" Dec 05 01:14:51 crc kubenswrapper[4990]: I1205 01:14:51.824269 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:14:51 crc kubenswrapper[4990]: I1205 01:14:51.824703 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.229055 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758cb8b6df-2nvgz"] Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.231420 4990 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" podUID="d9d3b023-842c-4b26-894f-36a286358af4" containerName="controller-manager" containerID="cri-o://1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58" gracePeriod=30 Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.321030 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn"] Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.321261 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" podUID="916299e4-5311-4f1e-b3d2-5debf1583d1d" containerName="route-controller-manager" containerID="cri-o://9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297" gracePeriod=30 Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.774305 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.810181 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.867200 4990 generic.go:334] "Generic (PLEG): container finished" podID="d9d3b023-842c-4b26-894f-36a286358af4" containerID="1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58" exitCode=0 Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.867246 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" event={"ID":"d9d3b023-842c-4b26-894f-36a286358af4","Type":"ContainerDied","Data":"1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58"} Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.867311 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" event={"ID":"d9d3b023-842c-4b26-894f-36a286358af4","Type":"ContainerDied","Data":"d8ce581d65944f4adfe34ce42b619ffa965eccd0579882a89ca16328cd693335"} Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.867335 4990 scope.go:117] "RemoveContainer" containerID="1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.867676 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758cb8b6df-2nvgz" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.868943 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.869000 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" event={"ID":"916299e4-5311-4f1e-b3d2-5debf1583d1d","Type":"ContainerDied","Data":"9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297"} Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.868930 4990 generic.go:334] "Generic (PLEG): container finished" podID="916299e4-5311-4f1e-b3d2-5debf1583d1d" containerID="9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297" exitCode=0 Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.869102 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn" event={"ID":"916299e4-5311-4f1e-b3d2-5debf1583d1d","Type":"ContainerDied","Data":"3ce4b860bceee19868064cf8e9bd5235d0f0c8b29c18219560c9fbba98f04a14"} Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874113 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916299e4-5311-4f1e-b3d2-5debf1583d1d-serving-cert\") pod \"916299e4-5311-4f1e-b3d2-5debf1583d1d\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874160 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-config\") pod \"916299e4-5311-4f1e-b3d2-5debf1583d1d\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874182 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-client-ca\") pod \"916299e4-5311-4f1e-b3d2-5debf1583d1d\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874206 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnrhj\" (UniqueName: \"kubernetes.io/projected/916299e4-5311-4f1e-b3d2-5debf1583d1d-kube-api-access-vnrhj\") pod \"916299e4-5311-4f1e-b3d2-5debf1583d1d\" (UID: \"916299e4-5311-4f1e-b3d2-5debf1583d1d\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874242 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-proxy-ca-bundles\") pod \"d9d3b023-842c-4b26-894f-36a286358af4\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874271 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-client-ca\") pod \"d9d3b023-842c-4b26-894f-36a286358af4\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.874298 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-config\") pod \"d9d3b023-842c-4b26-894f-36a286358af4\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 
01:14:58.874357 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhm98\" (UniqueName: \"kubernetes.io/projected/d9d3b023-842c-4b26-894f-36a286358af4-kube-api-access-rhm98\") pod \"d9d3b023-842c-4b26-894f-36a286358af4\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.875113 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9d3b023-842c-4b26-894f-36a286358af4" (UID: "d9d3b023-842c-4b26-894f-36a286358af4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.875155 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-config" (OuterVolumeSpecName: "config") pod "d9d3b023-842c-4b26-894f-36a286358af4" (UID: "d9d3b023-842c-4b26-894f-36a286358af4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.875582 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d9d3b023-842c-4b26-894f-36a286358af4" (UID: "d9d3b023-842c-4b26-894f-36a286358af4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.875730 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-client-ca" (OuterVolumeSpecName: "client-ca") pod "916299e4-5311-4f1e-b3d2-5debf1583d1d" (UID: "916299e4-5311-4f1e-b3d2-5debf1583d1d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.876439 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-config" (OuterVolumeSpecName: "config") pod "916299e4-5311-4f1e-b3d2-5debf1583d1d" (UID: "916299e4-5311-4f1e-b3d2-5debf1583d1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.879755 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916299e4-5311-4f1e-b3d2-5debf1583d1d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "916299e4-5311-4f1e-b3d2-5debf1583d1d" (UID: "916299e4-5311-4f1e-b3d2-5debf1583d1d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.879837 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d3b023-842c-4b26-894f-36a286358af4-kube-api-access-rhm98" (OuterVolumeSpecName: "kube-api-access-rhm98") pod "d9d3b023-842c-4b26-894f-36a286358af4" (UID: "d9d3b023-842c-4b26-894f-36a286358af4"). InnerVolumeSpecName "kube-api-access-rhm98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.879984 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916299e4-5311-4f1e-b3d2-5debf1583d1d-kube-api-access-vnrhj" (OuterVolumeSpecName: "kube-api-access-vnrhj") pod "916299e4-5311-4f1e-b3d2-5debf1583d1d" (UID: "916299e4-5311-4f1e-b3d2-5debf1583d1d"). InnerVolumeSpecName "kube-api-access-vnrhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.885404 4990 scope.go:117] "RemoveContainer" containerID="1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58" Dec 05 01:14:58 crc kubenswrapper[4990]: E1205 01:14:58.885919 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58\": container with ID starting with 1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58 not found: ID does not exist" containerID="1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.885979 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58"} err="failed to get container status \"1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58\": rpc error: code = NotFound desc = could not find container \"1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58\": container with ID starting with 1c15047ad12ba0b2ccaacb46abc2b3a48ff86c18687fd6bcff544b67399c1f58 not found: ID does not exist" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.886005 4990 scope.go:117] "RemoveContainer" containerID="9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.902328 4990 scope.go:117] "RemoveContainer" containerID="9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297" Dec 05 01:14:58 crc kubenswrapper[4990]: E1205 01:14:58.902834 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297\": container with ID starting with 9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297 not found: ID does not exist" containerID="9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.902867 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297"} err="failed to get container status \"9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297\": rpc error: code = NotFound desc = could not find container \"9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297\": container with ID starting with 9f40912ac8acbc7d6073eec95c9ada9c3e111440563ccc8ddc156e4a1b88b297 not found: ID does not exist" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.967208 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" podUID="6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" containerName="oauth-openshift" containerID="cri-o://d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba" gracePeriod=15 
Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.975435 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d3b023-842c-4b26-894f-36a286358af4-serving-cert\") pod \"d9d3b023-842c-4b26-894f-36a286358af4\" (UID: \"d9d3b023-842c-4b26-894f-36a286358af4\") " Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.975821 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.975863 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhm98\" (UniqueName: \"kubernetes.io/projected/d9d3b023-842c-4b26-894f-36a286358af4-kube-api-access-rhm98\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.975917 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916299e4-5311-4f1e-b3d2-5debf1583d1d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.975999 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.976044 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/916299e4-5311-4f1e-b3d2-5debf1583d1d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.976059 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnrhj\" (UniqueName: \"kubernetes.io/projected/916299e4-5311-4f1e-b3d2-5debf1583d1d-kube-api-access-vnrhj\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.976073 4990 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.976084 4990 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d3b023-842c-4b26-894f-36a286358af4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:58 crc kubenswrapper[4990]: I1205 01:14:58.980011 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d3b023-842c-4b26-894f-36a286358af4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9d3b023-842c-4b26-894f-36a286358af4" (UID: "d9d3b023-842c-4b26-894f-36a286358af4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.077964 4990 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d3b023-842c-4b26-894f-36a286358af4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.225535 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.230859 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dbbb759-xwqxn"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.246004 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758cb8b6df-2nvgz"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.254975 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-758cb8b6df-2nvgz"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.488796 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684568 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-ocp-branding-template\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684631 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-provider-selection\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684672 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-service-ca\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684696 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-serving-cert\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684725 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-policies\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684755 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-login\") pod 
\"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684787 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-trusted-ca-bundle\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684819 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncmmn\" (UniqueName: \"kubernetes.io/projected/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-kube-api-access-ncmmn\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684844 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-idp-0-file-data\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684890 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-error\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684923 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-router-certs\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684953 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-cliconfig\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.684978 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-dir\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.685005 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-session\") pod \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\" (UID: \"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33\") " Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.685546 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.686653 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.686680 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.686771 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.686796 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.692256 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.692723 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.693040 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.693204 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-kube-api-access-ncmmn" (OuterVolumeSpecName: "kube-api-access-ncmmn") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "kube-api-access-ncmmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.693676 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.694063 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.694324 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.694778 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.695050 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" (UID: "6e0a2a0f-0abc-4786-a996-c8cf5abb3e33"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786401 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786455 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786517 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786530 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786544 4990 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786556 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786569 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786584 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncmmn\" (UniqueName: \"kubernetes.io/projected/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-kube-api-access-ncmmn\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786596 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786641 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786686 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786727 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786738 4990 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.786754 4990 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.881202 4990 generic.go:334] "Generic (PLEG): container finished" podID="6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" containerID="d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba" exitCode=0 Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.881281 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" event={"ID":"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33","Type":"ContainerDied","Data":"d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba"} Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.881323 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" event={"ID":"6e0a2a0f-0abc-4786-a996-c8cf5abb3e33","Type":"ContainerDied","Data":"93dcb27f53f62c5da4d9c7eb3414bff0d15a6833996d8d69ce54b9fc36247193"} Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.881354 4990 scope.go:117] "RemoveContainer" containerID="d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.881574 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lp5lw" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905122 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b464c887-jgg79"] Dec 05 01:14:59 crc kubenswrapper[4990]: E1205 01:14:59.905403 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d3b023-842c-4b26-894f-36a286358af4" containerName="controller-manager" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905426 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d3b023-842c-4b26-894f-36a286358af4" containerName="controller-manager" Dec 05 01:14:59 crc kubenswrapper[4990]: E1205 01:14:59.905446 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" containerName="oauth-openshift" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905454 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" containerName="oauth-openshift" Dec 05 01:14:59 crc kubenswrapper[4990]: E1205 01:14:59.905475 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916299e4-5311-4f1e-b3d2-5debf1583d1d" containerName="route-controller-manager" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905505 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="916299e4-5311-4f1e-b3d2-5debf1583d1d" containerName="route-controller-manager" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905614 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d3b023-842c-4b26-894f-36a286358af4" containerName="controller-manager" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905634 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" containerName="oauth-openshift" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.905643 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="916299e4-5311-4f1e-b3d2-5debf1583d1d" containerName="route-controller-manager" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.906086 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.907815 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.908093 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.908413 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.909459 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.910376 4990 scope.go:117] "RemoveContainer" containerID="d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.910767 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.911104 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 01:14:59 crc kubenswrapper[4990]: E1205 01:14:59.911181 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba\": container with ID starting with d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba not found: ID does not exist" containerID="d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.911269 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba"} err="failed to get container status \"d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba\": rpc error: code = NotFound desc = could not find container \"d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba\": container with ID starting with d4813143ea16f48778a869456379e234fc690eee9ecbe2550278af912db1cdba not found: ID does not exist" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.911845 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.913276 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.915756 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.915950 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.918626 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.918757 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.918813 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.918954 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.922046 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.923371 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b464c887-jgg79"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.941286 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916299e4-5311-4f1e-b3d2-5debf1583d1d" path="/var/lib/kubelet/pods/916299e4-5311-4f1e-b3d2-5debf1583d1d/volumes" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.942743 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d3b023-842c-4b26-894f-36a286358af4" path="/var/lib/kubelet/pods/d9d3b023-842c-4b26-894f-36a286358af4/volumes" Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.943546 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.963158 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lp5lw"] Dec 05 01:14:59 crc kubenswrapper[4990]: I1205 01:14:59.967583 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lp5lw"] Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090095 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f8feea-a619-4447-bff8-b7d861850d23-serving-cert\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090211 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5nl\" (UniqueName: \"kubernetes.io/projected/968ec657-c471-445f-8f05-fb219cdbc7dc-kube-api-access-wc5nl\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " 
pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090332 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-config\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090435 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-client-ca\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090476 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-proxy-ca-bundles\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090527 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968ec657-c471-445f-8f05-fb219cdbc7dc-serving-cert\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090546 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f8feea-a619-4447-bff8-b7d861850d23-config\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090648 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f8feea-a619-4447-bff8-b7d861850d23-client-ca\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.090701 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlx2\" (UniqueName: \"kubernetes.io/projected/97f8feea-a619-4447-bff8-b7d861850d23-kube-api-access-zqlx2\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.177304 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth"] Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.178807 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.182196 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.182697 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.191236 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth"] Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.191922 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-proxy-ca-bundles\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.191988 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d54377-74a2-4ffd-89c9-f934ba34b18c-secret-volume\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192025 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968ec657-c471-445f-8f05-fb219cdbc7dc-serving-cert\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192058 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f8feea-a619-4447-bff8-b7d861850d23-config\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192115 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f8feea-a619-4447-bff8-b7d861850d23-client-ca\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192198 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqlx2\" (UniqueName: \"kubernetes.io/projected/97f8feea-a619-4447-bff8-b7d861850d23-kube-api-access-zqlx2\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192252 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d54377-74a2-4ffd-89c9-f934ba34b18c-config-volume\") 
pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192291 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f8feea-a619-4447-bff8-b7d861850d23-serving-cert\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192341 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc5nl\" (UniqueName: \"kubernetes.io/projected/968ec657-c471-445f-8f05-fb219cdbc7dc-kube-api-access-wc5nl\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192389 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnr4w\" (UniqueName: \"kubernetes.io/projected/e2d54377-74a2-4ffd-89c9-f934ba34b18c-kube-api-access-xnr4w\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192434 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-config\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.192511 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-client-ca\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.193257 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f8feea-a619-4447-bff8-b7d861850d23-client-ca\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.193699 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-client-ca\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.194502 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f8feea-a619-4447-bff8-b7d861850d23-config\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 
05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.195105 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-proxy-ca-bundles\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.196514 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968ec657-c471-445f-8f05-fb219cdbc7dc-config\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.202990 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/968ec657-c471-445f-8f05-fb219cdbc7dc-serving-cert\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.203863 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f8feea-a619-4447-bff8-b7d861850d23-serving-cert\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.216070 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc5nl\" (UniqueName: \"kubernetes.io/projected/968ec657-c471-445f-8f05-fb219cdbc7dc-kube-api-access-wc5nl\") pod \"controller-manager-65b464c887-jgg79\" (UID: \"968ec657-c471-445f-8f05-fb219cdbc7dc\") " pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.218635 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqlx2\" (UniqueName: \"kubernetes.io/projected/97f8feea-a619-4447-bff8-b7d861850d23-kube-api-access-zqlx2\") pod \"route-controller-manager-57b7659f6d-5x9cp\" (UID: \"97f8feea-a619-4447-bff8-b7d861850d23\") " pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.235126 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.264120 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.293416 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d54377-74a2-4ffd-89c9-f934ba34b18c-secret-volume\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.293533 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d54377-74a2-4ffd-89c9-f934ba34b18c-config-volume\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.293727 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnr4w\" (UniqueName: \"kubernetes.io/projected/e2d54377-74a2-4ffd-89c9-f934ba34b18c-kube-api-access-xnr4w\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.294673 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d54377-74a2-4ffd-89c9-f934ba34b18c-config-volume\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.296966 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d54377-74a2-4ffd-89c9-f934ba34b18c-secret-volume\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.308804 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnr4w\" (UniqueName: \"kubernetes.io/projected/e2d54377-74a2-4ffd-89c9-f934ba34b18c-kube-api-access-xnr4w\") pod \"collect-profiles-29414955-b9mth\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.505473 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.652962 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b464c887-jgg79"] Dec 05 01:15:00 crc kubenswrapper[4990]: W1205 01:15:00.658915 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod968ec657_c471_445f_8f05_fb219cdbc7dc.slice/crio-b663867e10dd28deb3ef2ed7d1be5412a032c7a33717c4e9af2ae8c2c2ddf130 WatchSource:0}: Error finding container b663867e10dd28deb3ef2ed7d1be5412a032c7a33717c4e9af2ae8c2c2ddf130: Status 404 returned error can't find the container with id b663867e10dd28deb3ef2ed7d1be5412a032c7a33717c4e9af2ae8c2c2ddf130 Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.684111 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp"] Dec 05 01:15:00 crc kubenswrapper[4990]: W1205 01:15:00.700152 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f8feea_a619_4447_bff8_b7d861850d23.slice/crio-339bdffc7a4805576905cb82055dc2b70ba8d71724bfb8fc671b5cd6a4091f46 WatchSource:0}: Error finding container 339bdffc7a4805576905cb82055dc2b70ba8d71724bfb8fc671b5cd6a4091f46: Status 404 returned error can't find the container with id 339bdffc7a4805576905cb82055dc2b70ba8d71724bfb8fc671b5cd6a4091f46 Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.887509 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" event={"ID":"97f8feea-a619-4447-bff8-b7d861850d23","Type":"ContainerStarted","Data":"c69e238b45cc9f28db9bd1be34bf86c27f9552a1bab06ae838799ebdedabb77a"} Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.887791 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" event={"ID":"97f8feea-a619-4447-bff8-b7d861850d23","Type":"ContainerStarted","Data":"339bdffc7a4805576905cb82055dc2b70ba8d71724bfb8fc671b5cd6a4091f46"} Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.887928 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.890341 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" event={"ID":"968ec657-c471-445f-8f05-fb219cdbc7dc","Type":"ContainerStarted","Data":"3288dbdc7c9d075c799b5312c93adf548b00b74e5a06911946ccc8d48ee04dae"} Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.890379 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" event={"ID":"968ec657-c471-445f-8f05-fb219cdbc7dc","Type":"ContainerStarted","Data":"b663867e10dd28deb3ef2ed7d1be5412a032c7a33717c4e9af2ae8c2c2ddf130"} Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.890756 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.894999 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.913565 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" podStartSLOduration=2.913540654 podStartE2EDuration="2.913540654s" podCreationTimestamp="2025-12-05 01:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:15:00.906530283 +0000 UTC m=+399.282745644" watchObservedRunningTime="2025-12-05 01:15:00.913540654 +0000 UTC m=+399.289756055" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.927002 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b464c887-jgg79" podStartSLOduration=2.926970889 podStartE2EDuration="2.926970889s" podCreationTimestamp="2025-12-05 01:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:15:00.921721158 +0000 UTC m=+399.297936529" watchObservedRunningTime="2025-12-05 01:15:00.926970889 +0000 UTC m=+399.303186270" Dec 05 01:15:00 crc kubenswrapper[4990]: I1205 01:15:00.978200 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth"] Dec 05 01:15:00 crc kubenswrapper[4990]: W1205 01:15:00.984846 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d54377_74a2_4ffd_89c9_f934ba34b18c.slice/crio-e09f450086536735a8e03cb99718d95a71c57d402e322ff597414cd8c0b281a6 WatchSource:0}: Error finding container e09f450086536735a8e03cb99718d95a71c57d402e322ff597414cd8c0b281a6: Status 404 returned error can't find the container with id e09f450086536735a8e03cb99718d95a71c57d402e322ff597414cd8c0b281a6 Dec 05 01:15:01 crc kubenswrapper[4990]: I1205 01:15:01.222751 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57b7659f6d-5x9cp" Dec 05 01:15:01 crc kubenswrapper[4990]: I1205 01:15:01.903887 4990 generic.go:334] "Generic (PLEG): container finished" podID="e2d54377-74a2-4ffd-89c9-f934ba34b18c" containerID="d17cb2c351eca6eee065f89515a185868f02fbb50d2b2bd7c3469c7a50d2f837" exitCode=0 Dec 05 01:15:01 crc kubenswrapper[4990]: I1205 01:15:01.903977 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" event={"ID":"e2d54377-74a2-4ffd-89c9-f934ba34b18c","Type":"ContainerDied","Data":"d17cb2c351eca6eee065f89515a185868f02fbb50d2b2bd7c3469c7a50d2f837"} Dec 05 01:15:01 crc kubenswrapper[4990]: I1205 01:15:01.905734 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" event={"ID":"e2d54377-74a2-4ffd-89c9-f934ba34b18c","Type":"ContainerStarted","Data":"e09f450086536735a8e03cb99718d95a71c57d402e322ff597414cd8c0b281a6"} Dec 05 01:15:01 crc kubenswrapper[4990]: I1205 01:15:01.942223 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0a2a0f-0abc-4786-a996-c8cf5abb3e33" path="/var/lib/kubelet/pods/6e0a2a0f-0abc-4786-a996-c8cf5abb3e33/volumes" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.203028 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.234922 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnr4w\" (UniqueName: \"kubernetes.io/projected/e2d54377-74a2-4ffd-89c9-f934ba34b18c-kube-api-access-xnr4w\") pod \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.243738 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d54377-74a2-4ffd-89c9-f934ba34b18c-kube-api-access-xnr4w" (OuterVolumeSpecName: "kube-api-access-xnr4w") pod "e2d54377-74a2-4ffd-89c9-f934ba34b18c" (UID: "e2d54377-74a2-4ffd-89c9-f934ba34b18c"). InnerVolumeSpecName "kube-api-access-xnr4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.336083 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d54377-74a2-4ffd-89c9-f934ba34b18c-secret-volume\") pod \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.336169 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d54377-74a2-4ffd-89c9-f934ba34b18c-config-volume\") pod \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\" (UID: \"e2d54377-74a2-4ffd-89c9-f934ba34b18c\") " Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.336583 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnr4w\" (UniqueName: \"kubernetes.io/projected/e2d54377-74a2-4ffd-89c9-f934ba34b18c-kube-api-access-xnr4w\") on node \"crc\" DevicePath \"\"" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.337549 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d54377-74a2-4ffd-89c9-f934ba34b18c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2d54377-74a2-4ffd-89c9-f934ba34b18c" (UID: "e2d54377-74a2-4ffd-89c9-f934ba34b18c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.340338 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d54377-74a2-4ffd-89c9-f934ba34b18c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2d54377-74a2-4ffd-89c9-f934ba34b18c" (UID: "e2d54377-74a2-4ffd-89c9-f934ba34b18c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.437537 4990 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d54377-74a2-4ffd-89c9-f934ba34b18c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.437577 4990 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d54377-74a2-4ffd-89c9-f934ba34b18c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.920895 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" event={"ID":"e2d54377-74a2-4ffd-89c9-f934ba34b18c","Type":"ContainerDied","Data":"e09f450086536735a8e03cb99718d95a71c57d402e322ff597414cd8c0b281a6"} Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.920977 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414955-b9mth" Dec 05 01:15:03 crc kubenswrapper[4990]: I1205 01:15:03.920982 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09f450086536735a8e03cb99718d95a71c57d402e322ff597414cd8c0b281a6" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.914152 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-779f94dc95-thsxj"] Dec 05 01:15:08 crc kubenswrapper[4990]: E1205 01:15:08.915518 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d54377-74a2-4ffd-89c9-f934ba34b18c" containerName="collect-profiles" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.915537 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d54377-74a2-4ffd-89c9-f934ba34b18c" containerName="collect-profiles" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.915688 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d54377-74a2-4ffd-89c9-f934ba34b18c" containerName="collect-profiles" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.916215 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.929739 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.934612 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.934995 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.935918 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.936263 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.936309 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.936325 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.936457 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.936644 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.936832 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.938260 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.938656 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.958429 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.958959 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-779f94dc95-thsxj"] Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.963243 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 01:15:08 crc kubenswrapper[4990]: I1205 01:15:08.966471 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023329 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-login\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " 
pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023405 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-session\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023475 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023561 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023626 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023659 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sszgv\" (UniqueName: \"kubernetes.io/projected/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-kube-api-access-sszgv\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023699 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-error\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023744 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-audit-dir\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.023916 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-serving-cert\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.024009 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-cliconfig\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.024062 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-audit-policies\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.024188 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.024257 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-service-ca\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.024293 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-router-certs\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.125916 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-login\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126007 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-session\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126071 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126123 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126218 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126263 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sszgv\" (UniqueName: \"kubernetes.io/projected/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-kube-api-access-sszgv\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126314 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-error\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126364 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-audit-dir\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126430 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-serving-cert\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126508 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-cliconfig\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126544 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-audit-policies\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126588 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126630 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-service-ca\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.126662 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-router-certs\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.127537 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-audit-dir\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.128814 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-audit-policies\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.129238 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-cliconfig\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.129310 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.130139 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.134761 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-error\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.135147 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-session\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.135853 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-router-certs\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.135963 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.136627 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-serving-cert\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.137788 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.138254 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-user-template-login\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.140190 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-779f94dc95-thsxj\" 
(UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.157042 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sszgv\" (UniqueName: \"kubernetes.io/projected/8d45a6a9-d3b6-409f-9bb8-a9fff7958930-kube-api-access-sszgv\") pod \"oauth-openshift-779f94dc95-thsxj\" (UID: \"8d45a6a9-d3b6-409f-9bb8-a9fff7958930\") " pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.254522 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.731542 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-779f94dc95-thsxj"] Dec 05 01:15:09 crc kubenswrapper[4990]: I1205 01:15:09.971456 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" event={"ID":"8d45a6a9-d3b6-409f-9bb8-a9fff7958930","Type":"ContainerStarted","Data":"32004291b6a0e5d464e47926720823691e18656ff99daa6575300faad1361391"} Dec 05 01:15:10 crc kubenswrapper[4990]: I1205 01:15:10.982350 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" event={"ID":"8d45a6a9-d3b6-409f-9bb8-a9fff7958930","Type":"ContainerStarted","Data":"fa987cd1b0d730be5128ae9f322967241f1f12f12ff68695e7fca7ae8ba37ff8"} Dec 05 01:15:10 crc kubenswrapper[4990]: I1205 01:15:10.982783 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:10 crc kubenswrapper[4990]: I1205 01:15:10.992606 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" Dec 05 01:15:11 crc kubenswrapper[4990]: I1205 01:15:11.017072 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-779f94dc95-thsxj" podStartSLOduration=38.017043121 podStartE2EDuration="38.017043121s" podCreationTimestamp="2025-12-05 01:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:15:11.0145686 +0000 UTC m=+409.390784021" watchObservedRunningTime="2025-12-05 01:15:11.017043121 +0000 UTC m=+409.393258512" Dec 05 01:15:21 crc kubenswrapper[4990]: I1205 01:15:21.823560 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:15:21 crc kubenswrapper[4990]: I1205 01:15:21.824298 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:15:21 crc kubenswrapper[4990]: I1205 01:15:21.824373 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:15:21 crc 
kubenswrapper[4990]: I1205 01:15:21.825266 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b40660c456587b8dd05170b85828fefb6a3f2c0ef31ac80948416bdfddfbcfec"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:15:21 crc kubenswrapper[4990]: I1205 01:15:21.825369 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://b40660c456587b8dd05170b85828fefb6a3f2c0ef31ac80948416bdfddfbcfec" gracePeriod=600 Dec 05 01:15:22 crc kubenswrapper[4990]: I1205 01:15:22.063346 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="b40660c456587b8dd05170b85828fefb6a3f2c0ef31ac80948416bdfddfbcfec" exitCode=0 Dec 05 01:15:22 crc kubenswrapper[4990]: I1205 01:15:22.063412 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"b40660c456587b8dd05170b85828fefb6a3f2c0ef31ac80948416bdfddfbcfec"} Dec 05 01:15:22 crc kubenswrapper[4990]: I1205 01:15:22.063718 4990 scope.go:117] "RemoveContainer" containerID="2baa83898affd12bca0f0dda11dd7b7364fe951a1fad7501b08658f3c7e11d11" Dec 05 01:15:23 crc kubenswrapper[4990]: I1205 01:15:23.074158 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"0fff34981cf4773bd36204e463d90f40dceebdf614dbe550df05744f4f5aade7"} Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.023313 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrngg"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.024183 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrngg" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="registry-server" containerID="cri-o://6c688b54455cfd946ea7f7044077f20d8b6e7a814384c36313fff951e295fff0" gracePeriod=30 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.030132 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7vm5c"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.030510 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7vm5c" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="registry-server" containerID="cri-o://08d6a3ae4ea9ee88069928b52717f6104548705c0345e1007388a53a2c7c0a26" gracePeriod=30 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.039648 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nkp9t"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.039932 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator" 
containerID="cri-o://a17f708f9ab3738b1d4a6f860a5814b4938a31c15609e34442334d78ade5b5dd" gracePeriod=30 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.043113 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg56k"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.043333 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mg56k" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="registry-server" containerID="cri-o://c10460cca96f632974506a9ca70c7b67451ba042fbe8699b563fab9336ca1b67" gracePeriod=30 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.056006 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lz2ns"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.056995 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.062037 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kthx9"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.062301 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kthx9" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="registry-server" containerID="cri-o://76b2b2e6f015dd6a26d82cab2e56c758000785d4eac43508089edd88802dc725" gracePeriod=30 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.077160 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lz2ns"] Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.160397 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.160441 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlv2\" (UniqueName: \"kubernetes.io/projected/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-kube-api-access-nrlv2\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.160464 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.255929 4990 generic.go:334] "Generic (PLEG): container finished" podID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerID="6c688b54455cfd946ea7f7044077f20d8b6e7a814384c36313fff951e295fff0" exitCode=0 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.256018 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrngg" 
event={"ID":"0cbdb0ba-36d8-4cb7-878a-88afedb7983c","Type":"ContainerDied","Data":"6c688b54455cfd946ea7f7044077f20d8b6e7a814384c36313fff951e295fff0"} Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.260334 4990 generic.go:334] "Generic (PLEG): container finished" podID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerID="08d6a3ae4ea9ee88069928b52717f6104548705c0345e1007388a53a2c7c0a26" exitCode=0 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.260446 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerDied","Data":"08d6a3ae4ea9ee88069928b52717f6104548705c0345e1007388a53a2c7c0a26"} Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.261392 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.261440 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlv2\" (UniqueName: \"kubernetes.io/projected/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-kube-api-access-nrlv2\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.261468 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.262829 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.263262 4990 generic.go:334] "Generic (PLEG): container finished" podID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerID="76b2b2e6f015dd6a26d82cab2e56c758000785d4eac43508089edd88802dc725" exitCode=0 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.263339 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerDied","Data":"76b2b2e6f015dd6a26d82cab2e56c758000785d4eac43508089edd88802dc725"} Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.268245 4990 generic.go:334] "Generic (PLEG): container finished" podID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerID="c10460cca96f632974506a9ca70c7b67451ba042fbe8699b563fab9336ca1b67" exitCode=0 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.268295 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" 
event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerDied","Data":"c10460cca96f632974506a9ca70c7b67451ba042fbe8699b563fab9336ca1b67"} Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.271663 4990 generic.go:334] "Generic (PLEG): container finished" podID="f901191e-752f-4cca-bf08-3274cf6a9254" containerID="a17f708f9ab3738b1d4a6f860a5814b4938a31c15609e34442334d78ade5b5dd" exitCode=0 Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.271713 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" event={"ID":"f901191e-752f-4cca-bf08-3274cf6a9254","Type":"ContainerDied","Data":"a17f708f9ab3738b1d4a6f860a5814b4938a31c15609e34442334d78ade5b5dd"} Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.271881 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.277368 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlv2\" (UniqueName: \"kubernetes.io/projected/24357b8a-a4f6-43dd-ac9f-d563fa8762d4-kube-api-access-nrlv2\") pod \"marketplace-operator-79b997595-lz2ns\" (UID: \"24357b8a-a4f6-43dd-ac9f-d563fa8762d4\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.517226 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.520378 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrngg" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.589594 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.667012 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-catalog-content\") pod \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.667084 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxjzd\" (UniqueName: \"kubernetes.io/projected/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-kube-api-access-nxjzd\") pod \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.667182 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-utilities\") pod \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\" (UID: \"0cbdb0ba-36d8-4cb7-878a-88afedb7983c\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.669281 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-utilities" (OuterVolumeSpecName: "utilities") pod "0cbdb0ba-36d8-4cb7-878a-88afedb7983c" (UID: "0cbdb0ba-36d8-4cb7-878a-88afedb7983c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.675584 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-kube-api-access-nxjzd" (OuterVolumeSpecName: "kube-api-access-nxjzd") pod "0cbdb0ba-36d8-4cb7-878a-88afedb7983c" (UID: "0cbdb0ba-36d8-4cb7-878a-88afedb7983c"). InnerVolumeSpecName "kube-api-access-nxjzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.677771 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vm5c" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.685304 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kthx9" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.690808 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg56k" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.735407 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cbdb0ba-36d8-4cb7-878a-88afedb7983c" (UID: "0cbdb0ba-36d8-4cb7-878a-88afedb7983c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.768864 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-operator-metrics\") pod \"f901191e-752f-4cca-bf08-3274cf6a9254\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.768920 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-trusted-ca\") pod \"f901191e-752f-4cca-bf08-3274cf6a9254\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.768975 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9846c\" (UniqueName: \"kubernetes.io/projected/f901191e-752f-4cca-bf08-3274cf6a9254-kube-api-access-9846c\") pod \"f901191e-752f-4cca-bf08-3274cf6a9254\" (UID: \"f901191e-752f-4cca-bf08-3274cf6a9254\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.769161 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.769204 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxjzd\" (UniqueName: \"kubernetes.io/projected/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-kube-api-access-nxjzd\") on node \"crc\" DevicePath \"\"" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.769215 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cbdb0ba-36d8-4cb7-878a-88afedb7983c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.771154 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f901191e-752f-4cca-bf08-3274cf6a9254" (UID: "f901191e-752f-4cca-bf08-3274cf6a9254"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.771395 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f901191e-752f-4cca-bf08-3274cf6a9254" (UID: "f901191e-752f-4cca-bf08-3274cf6a9254"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.773979 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f901191e-752f-4cca-bf08-3274cf6a9254-kube-api-access-9846c" (OuterVolumeSpecName: "kube-api-access-9846c") pod "f901191e-752f-4cca-bf08-3274cf6a9254" (UID: "f901191e-752f-4cca-bf08-3274cf6a9254"). InnerVolumeSpecName "kube-api-access-9846c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.869911 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-utilities\") pod \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870266 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-utilities\") pod \"df8138ec-8df1-4959-90dd-ee4a224c92f8\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870340 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-catalog-content\") pod \"df8138ec-8df1-4959-90dd-ee4a224c92f8\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870369 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvng4\" (UniqueName: \"kubernetes.io/projected/df8138ec-8df1-4959-90dd-ee4a224c92f8-kube-api-access-xvng4\") pod \"df8138ec-8df1-4959-90dd-ee4a224c92f8\" (UID: \"df8138ec-8df1-4959-90dd-ee4a224c92f8\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870391 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfnrz\" (UniqueName: \"kubernetes.io/projected/4decd32e-d179-4ec7-9ec0-c8744ef37b47-kube-api-access-hfnrz\") pod \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870424 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-catalog-content\") pod \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\" (UID: \"4decd32e-d179-4ec7-9ec0-c8744ef37b47\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870445 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhd2n\" (UniqueName: \"kubernetes.io/projected/4100dc4e-10a0-4d5c-b441-c87e80787d93-kube-api-access-xhd2n\") pod \"4100dc4e-10a0-4d5c-b441-c87e80787d93\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870498 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-catalog-content\") pod \"4100dc4e-10a0-4d5c-b441-c87e80787d93\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870544 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-utilities\") pod \"4100dc4e-10a0-4d5c-b441-c87e80787d93\" (UID: \"4100dc4e-10a0-4d5c-b441-c87e80787d93\") " Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870692 4990 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" 
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870707 4990 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f901191e-752f-4cca-bf08-3274cf6a9254-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.870719 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9846c\" (UniqueName: \"kubernetes.io/projected/f901191e-752f-4cca-bf08-3274cf6a9254-kube-api-access-9846c\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.871423 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-utilities" (OuterVolumeSpecName: "utilities") pod "4100dc4e-10a0-4d5c-b441-c87e80787d93" (UID: "4100dc4e-10a0-4d5c-b441-c87e80787d93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.871924 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-utilities" (OuterVolumeSpecName: "utilities") pod "df8138ec-8df1-4959-90dd-ee4a224c92f8" (UID: "df8138ec-8df1-4959-90dd-ee4a224c92f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.872781 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-utilities" (OuterVolumeSpecName: "utilities") pod "4decd32e-d179-4ec7-9ec0-c8744ef37b47" (UID: "4decd32e-d179-4ec7-9ec0-c8744ef37b47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.874229 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8138ec-8df1-4959-90dd-ee4a224c92f8-kube-api-access-xvng4" (OuterVolumeSpecName: "kube-api-access-xvng4") pod "df8138ec-8df1-4959-90dd-ee4a224c92f8" (UID: "df8138ec-8df1-4959-90dd-ee4a224c92f8"). InnerVolumeSpecName "kube-api-access-xvng4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.874580 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4decd32e-d179-4ec7-9ec0-c8744ef37b47-kube-api-access-hfnrz" (OuterVolumeSpecName: "kube-api-access-hfnrz") pod "4decd32e-d179-4ec7-9ec0-c8744ef37b47" (UID: "4decd32e-d179-4ec7-9ec0-c8744ef37b47"). InnerVolumeSpecName "kube-api-access-hfnrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.874965 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4100dc4e-10a0-4d5c-b441-c87e80787d93-kube-api-access-xhd2n" (OuterVolumeSpecName: "kube-api-access-xhd2n") pod "4100dc4e-10a0-4d5c-b441-c87e80787d93" (UID: "4100dc4e-10a0-4d5c-b441-c87e80787d93"). InnerVolumeSpecName "kube-api-access-xhd2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.891897 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df8138ec-8df1-4959-90dd-ee4a224c92f8" (UID: "df8138ec-8df1-4959-90dd-ee4a224c92f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.938708 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4100dc4e-10a0-4d5c-b441-c87e80787d93" (UID: "4100dc4e-10a0-4d5c-b441-c87e80787d93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972272 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972351 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvng4\" (UniqueName: \"kubernetes.io/projected/df8138ec-8df1-4959-90dd-ee4a224c92f8-kube-api-access-xvng4\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972373 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfnrz\" (UniqueName: \"kubernetes.io/projected/4decd32e-d179-4ec7-9ec0-c8744ef37b47-kube-api-access-hfnrz\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972423 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhd2n\" (UniqueName: \"kubernetes.io/projected/4100dc4e-10a0-4d5c-b441-c87e80787d93-kube-api-access-xhd2n\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972440 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972456 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4100dc4e-10a0-4d5c-b441-c87e80787d93-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972536 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.972555 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8138ec-8df1-4959-90dd-ee4a224c92f8-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:46 crc kubenswrapper[4990]: I1205 01:15:46.974539 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lz2ns"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.006634 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4decd32e-d179-4ec7-9ec0-c8744ef37b47" (UID: "4decd32e-d179-4ec7-9ec0-c8744ef37b47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.073397 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4decd32e-d179-4ec7-9ec0-c8744ef37b47-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.278536 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vm5c"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.278560 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vm5c" event={"ID":"4100dc4e-10a0-4d5c-b441-c87e80787d93","Type":"ContainerDied","Data":"4a1a53129bfa46d7e87653d6a41d7328debcac016a9e04af1bbb0fbb30320523"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.278649 4990 scope.go:117] "RemoveContainer" containerID="08d6a3ae4ea9ee88069928b52717f6104548705c0345e1007388a53a2c7c0a26"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.281563 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kthx9" event={"ID":"4decd32e-d179-4ec7-9ec0-c8744ef37b47","Type":"ContainerDied","Data":"d5009259f58570980a3721480a4bdf953e357717f098e62e57cc9401a809c3f8"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.281616 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kthx9"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.298835 4990 scope.go:117] "RemoveContainer" containerID="20ffe511e3ade39ea7c0c851875c2eab18afb0409c70fc529ac06613d6858d65"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.301401 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg56k" event={"ID":"df8138ec-8df1-4959-90dd-ee4a224c92f8","Type":"ContainerDied","Data":"b00d7fc412a9cc8160e5cca60fc2aee9699c0e3eaa0cd547453a3e6d939cb19c"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.301625 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg56k"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.307880 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.308012 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nkp9t" event={"ID":"f901191e-752f-4cca-bf08-3274cf6a9254","Type":"ContainerDied","Data":"6906712a05dce9e2ad9a3d6f22a0b3962f422afab1805df834b9cb0438cecf22"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.317247 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrngg" event={"ID":"0cbdb0ba-36d8-4cb7-878a-88afedb7983c","Type":"ContainerDied","Data":"f87e419ffd7b1e2e8fdbe1a93c2880b3dd50763e358a3baab0cbf952dd2c8d77"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.317416 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrngg"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.324883 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7vm5c"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.329558 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" event={"ID":"24357b8a-a4f6-43dd-ac9f-d563fa8762d4","Type":"ContainerStarted","Data":"3fc9dbd9a2f054974acf9d4711b9a5155e10092fa604fa62c23a1deb1da1aff8"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.329596 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" event={"ID":"24357b8a-a4f6-43dd-ac9f-d563fa8762d4","Type":"ContainerStarted","Data":"a243388b75a92fcaad9e2b913c2e5abb250924d655b8b9beb3fe8034ff4cd09f"}
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.329878 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7vm5c"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.330062 4990 scope.go:117] "RemoveContainer" containerID="c40e2a70ed55c200430ea909fd7cef4a34ccca28247412f2d190bd5585914cc1"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.330384 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.332192 4990 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lz2ns container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" start-of-body=
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.332246 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" podUID="24357b8a-a4f6-43dd-ac9f-d563fa8762d4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.351095 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kthx9"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.354288 4990 scope.go:117] "RemoveContainer" containerID="76b2b2e6f015dd6a26d82cab2e56c758000785d4eac43508089edd88802dc725"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.356885 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kthx9"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.366821 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns" podStartSLOduration=1.366803392 podStartE2EDuration="1.366803392s" podCreationTimestamp="2025-12-05 01:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:15:47.365535096 +0000 UTC m=+445.741750467" watchObservedRunningTime="2025-12-05 01:15:47.366803392 +0000 UTC m=+445.743018773"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.385900 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrngg"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.388392 4990 scope.go:117] "RemoveContainer" containerID="c1696983f31e76f0a8033f334f4d25d312922b470c6cf2f6b4af02f51a8614e4"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.390848 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrngg"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.395609 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg56k"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.399656 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg56k"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.403187 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nkp9t"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.410728 4990 scope.go:117] "RemoveContainer" containerID="1971f250d5a59bfa136190e9be00ed62a1cdf70d28349122d200826389c6ec8f"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.413950 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nkp9t"]
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.428808 4990 scope.go:117] "RemoveContainer" containerID="c10460cca96f632974506a9ca70c7b67451ba042fbe8699b563fab9336ca1b67"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.444045 4990 scope.go:117] "RemoveContainer" containerID="b13a26c2d1722c83a15d32fd9f2886f97517535b530f0c9b2e152e1de3a2f9a3"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.461306 4990 scope.go:117] "RemoveContainer" containerID="89008d52778aae1cfb28ded053f1faf57b2052ac2fbe5af7e123ecc29c27839c"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.474176 4990 scope.go:117] "RemoveContainer" containerID="a17f708f9ab3738b1d4a6f860a5814b4938a31c15609e34442334d78ade5b5dd"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.484439 4990 scope.go:117] "RemoveContainer" containerID="6c688b54455cfd946ea7f7044077f20d8b6e7a814384c36313fff951e295fff0"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.495529 4990 scope.go:117] "RemoveContainer" containerID="ae63c5afeb153d2bab678fc1d932364a8bc090665816bac249d450af15e1d227"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.510463 4990 scope.go:117] "RemoveContainer" containerID="b6d7fb0fc8ac0eed02944a68148d7de2dc9c07a6d130fe8f7d912c29d0562a4c"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.939058 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" path="/var/lib/kubelet/pods/0cbdb0ba-36d8-4cb7-878a-88afedb7983c/volumes"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.940454 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" path="/var/lib/kubelet/pods/4100dc4e-10a0-4d5c-b441-c87e80787d93/volumes"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.941209 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" path="/var/lib/kubelet/pods/4decd32e-d179-4ec7-9ec0-c8744ef37b47/volumes"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.942550 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" path="/var/lib/kubelet/pods/df8138ec-8df1-4959-90dd-ee4a224c92f8/volumes"
Dec 05 01:15:47 crc kubenswrapper[4990]: I1205 01:15:47.943393 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" path="/var/lib/kubelet/pods/f901191e-752f-4cca-bf08-3274cf6a9254/volumes"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238426 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lmm8t"]
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238679 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238693 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238704 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238713 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238724 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238734 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238746 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238754 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238767 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238775 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238789 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238799 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="extract-utilities"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238809 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238817 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238831 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238840 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238857 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238865 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238874 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238881 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="extract-content"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238891 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238899 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238910 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238918 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: E1205 01:15:48.238929 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.238937 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.239043 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4100dc4e-10a0-4d5c-b441-c87e80787d93" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.239061 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4decd32e-d179-4ec7-9ec0-c8744ef37b47" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.239072 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f901191e-752f-4cca-bf08-3274cf6a9254" containerName="marketplace-operator"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.239087 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8138ec-8df1-4959-90dd-ee4a224c92f8" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.239098 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cbdb0ba-36d8-4cb7-878a-88afedb7983c" containerName="registry-server"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.239937 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.241778 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.250268 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmm8t"]
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.343726 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lz2ns"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.387755 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk57l\" (UniqueName: \"kubernetes.io/projected/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-kube-api-access-lk57l\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.387871 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-utilities\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.388048 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-catalog-content\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.441006 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5wwxs"]
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.442293 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.445588 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.452422 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wwxs"]
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.489175 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-catalog-content\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.489227 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk57l\" (UniqueName: \"kubernetes.io/projected/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-kube-api-access-lk57l\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.489371 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-utilities\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.489668 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-catalog-content\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.490127 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-utilities\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.506651 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk57l\" (UniqueName: \"kubernetes.io/projected/a65b7b02-ae3b-432a-815e-f38c1b2beb4d-kube-api-access-lk57l\") pod \"redhat-marketplace-lmm8t\" (UID: \"a65b7b02-ae3b-432a-815e-f38c1b2beb4d\") " pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.552692 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmm8t"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.591104 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3336377a-d1c7-4c90-a824-63aa62dd945c-utilities\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.591368 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3336377a-d1c7-4c90-a824-63aa62dd945c-catalog-content\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.591430 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-484kx\" (UniqueName: \"kubernetes.io/projected/3336377a-d1c7-4c90-a824-63aa62dd945c-kube-api-access-484kx\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.692398 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3336377a-d1c7-4c90-a824-63aa62dd945c-catalog-content\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.693136 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-484kx\" (UniqueName: \"kubernetes.io/projected/3336377a-d1c7-4c90-a824-63aa62dd945c-kube-api-access-484kx\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.693244 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3336377a-d1c7-4c90-a824-63aa62dd945c-utilities\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.693258 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3336377a-d1c7-4c90-a824-63aa62dd945c-catalog-content\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.693598 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3336377a-d1c7-4c90-a824-63aa62dd945c-utilities\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.712284 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-484kx\" (UniqueName: \"kubernetes.io/projected/3336377a-d1c7-4c90-a824-63aa62dd945c-kube-api-access-484kx\") pod \"redhat-operators-5wwxs\" (UID: \"3336377a-d1c7-4c90-a824-63aa62dd945c\") " pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.758990 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wwxs"
Dec 05 01:15:48 crc kubenswrapper[4990]: I1205 01:15:48.979707 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmm8t"]
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.138642 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wwxs"]
Dec 05 01:15:49 crc kubenswrapper[4990]: W1205 01:15:49.172442 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3336377a_d1c7_4c90_a824_63aa62dd945c.slice/crio-a454066518785a53a45c16e01729daebc0cc20125a39d418739bd463432c4d68 WatchSource:0}: Error finding container a454066518785a53a45c16e01729daebc0cc20125a39d418739bd463432c4d68: Status 404 returned error can't find the container with id a454066518785a53a45c16e01729daebc0cc20125a39d418739bd463432c4d68
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.350344 4990 generic.go:334] "Generic (PLEG): container finished" podID="3336377a-d1c7-4c90-a824-63aa62dd945c" containerID="5bd3be446a120c8624de51dcee63f5adac69ecbcc5637bce99159455959cd82d" exitCode=0
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.350438 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wwxs" event={"ID":"3336377a-d1c7-4c90-a824-63aa62dd945c","Type":"ContainerDied","Data":"5bd3be446a120c8624de51dcee63f5adac69ecbcc5637bce99159455959cd82d"}
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.350522 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wwxs" event={"ID":"3336377a-d1c7-4c90-a824-63aa62dd945c","Type":"ContainerStarted","Data":"a454066518785a53a45c16e01729daebc0cc20125a39d418739bd463432c4d68"}
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.352473 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.355005 4990 generic.go:334] "Generic (PLEG): container finished" podID="a65b7b02-ae3b-432a-815e-f38c1b2beb4d" containerID="482dd9283b2505549181291525145772b3b883358bc38c17eb475f94cf5a1fdf" exitCode=0
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.355114 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmm8t" event={"ID":"a65b7b02-ae3b-432a-815e-f38c1b2beb4d","Type":"ContainerDied","Data":"482dd9283b2505549181291525145772b3b883358bc38c17eb475f94cf5a1fdf"}
Dec 05 01:15:49 crc kubenswrapper[4990]: I1205 01:15:49.355159 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmm8t" event={"ID":"a65b7b02-ae3b-432a-815e-f38c1b2beb4d","Type":"ContainerStarted","Data":"44dd0872d4f246e7f45a74d2b26484fe5f3ea7bf829f756cf3792832005e835d"}
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.362672 4990 generic.go:334] "Generic (PLEG): container finished" podID="a65b7b02-ae3b-432a-815e-f38c1b2beb4d" containerID="0a1b280daade869fc67ab634a65d1726fe6a67529d523a24fd94df31e3bc20b5" exitCode=0
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.362822 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmm8t" event={"ID":"a65b7b02-ae3b-432a-815e-f38c1b2beb4d","Type":"ContainerDied","Data":"0a1b280daade869fc67ab634a65d1726fe6a67529d523a24fd94df31e3bc20b5"}
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.368002 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wwxs" event={"ID":"3336377a-d1c7-4c90-a824-63aa62dd945c","Type":"ContainerStarted","Data":"52bfb34b634022f30f57b4ef1461fe4b12f2ec8b853908a9e3ee1385d40fba99"}
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.640884 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whvp8"]
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.643137 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.646468 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.658742 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whvp8"]
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.729192 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7d8297-4858-4ff7-ac43-3ee1771383a8-catalog-content\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.729467 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmd2j\" (UniqueName: \"kubernetes.io/projected/9a7d8297-4858-4ff7-ac43-3ee1771383a8-kube-api-access-tmd2j\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.729513 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7d8297-4858-4ff7-ac43-3ee1771383a8-utilities\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.831931 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7d8297-4858-4ff7-ac43-3ee1771383a8-catalog-content\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.832024 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmd2j\" (UniqueName: \"kubernetes.io/projected/9a7d8297-4858-4ff7-ac43-3ee1771383a8-kube-api-access-tmd2j\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.834153 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7d8297-4858-4ff7-ac43-3ee1771383a8-catalog-content\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.833872 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7d8297-4858-4ff7-ac43-3ee1771383a8-utilities\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.835224 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7d8297-4858-4ff7-ac43-3ee1771383a8-utilities\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.844286 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cb787"]
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.845681 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.848260 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cb787"]
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.850080 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.867173 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmd2j\" (UniqueName: \"kubernetes.io/projected/9a7d8297-4858-4ff7-ac43-3ee1771383a8-kube-api-access-tmd2j\") pod \"community-operators-whvp8\" (UID: \"9a7d8297-4858-4ff7-ac43-3ee1771383a8\") " pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.936578 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfc56ca-24fc-4541-8068-4185d01d16c1-catalog-content\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.936616 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7t94\" (UniqueName: \"kubernetes.io/projected/ddfc56ca-24fc-4541-8068-4185d01d16c1-kube-api-access-p7t94\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.936683 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfc56ca-24fc-4541-8068-4185d01d16c1-utilities\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:50 crc kubenswrapper[4990]: I1205 01:15:50.974821 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whvp8"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.037927 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfc56ca-24fc-4541-8068-4185d01d16c1-utilities\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.038307 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfc56ca-24fc-4541-8068-4185d01d16c1-catalog-content\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.038339 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t94\" (UniqueName: \"kubernetes.io/projected/ddfc56ca-24fc-4541-8068-4185d01d16c1-kube-api-access-p7t94\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.038533 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfc56ca-24fc-4541-8068-4185d01d16c1-utilities\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.038830 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfc56ca-24fc-4541-8068-4185d01d16c1-catalog-content\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.069348 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7t94\" (UniqueName: \"kubernetes.io/projected/ddfc56ca-24fc-4541-8068-4185d01d16c1-kube-api-access-p7t94\") pod \"certified-operators-cb787\" (UID: \"ddfc56ca-24fc-4541-8068-4185d01d16c1\") " pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.224013 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cb787"
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.369726 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whvp8"]
Dec 05 01:15:51 crc kubenswrapper[4990]: W1205 01:15:51.373056 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7d8297_4858_4ff7_ac43_3ee1771383a8.slice/crio-0597458fe680337b1b4477d27c6fb91ada43eadd8f9258a089516965960cc7da WatchSource:0}: Error finding container 0597458fe680337b1b4477d27c6fb91ada43eadd8f9258a089516965960cc7da: Status 404 returned error can't find the container with id 0597458fe680337b1b4477d27c6fb91ada43eadd8f9258a089516965960cc7da
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.379501 4990 generic.go:334] "Generic (PLEG): container finished" podID="3336377a-d1c7-4c90-a824-63aa62dd945c" containerID="52bfb34b634022f30f57b4ef1461fe4b12f2ec8b853908a9e3ee1385d40fba99" exitCode=0
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.379703 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wwxs" event={"ID":"3336377a-d1c7-4c90-a824-63aa62dd945c","Type":"ContainerDied","Data":"52bfb34b634022f30f57b4ef1461fe4b12f2ec8b853908a9e3ee1385d40fba99"}
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.386265 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmm8t" event={"ID":"a65b7b02-ae3b-432a-815e-f38c1b2beb4d","Type":"ContainerStarted","Data":"d23ce79c5c4ba6215554935b0d912ee4484e0f1f657634269444d9f42b9d2061"}
Dec 05 01:15:51 crc kubenswrapper[4990]: I1205 01:15:51.954977 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lmm8t" podStartSLOduration=2.530160473 podStartE2EDuration="3.95495896s" podCreationTimestamp="2025-12-05 01:15:48 +0000 UTC" firstStartedPulling="2025-12-05 01:15:49.356572213 +0000 UTC m=+447.732787574" lastFinishedPulling="2025-12-05 01:15:50.78137069 +0000 UTC m=+449.157586061" observedRunningTime="2025-12-05 01:15:51.414437548 +0000 UTC m=+449.790652929" watchObservedRunningTime="2025-12-05 01:15:51.95495896 +0000 UTC m=+450.331174321"
Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.168825 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cb787"]
Dec 05 01:15:52 crc kubenswrapper[4990]: W1205 01:15:52.179724 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddfc56ca_24fc_4541_8068_4185d01d16c1.slice/crio-f1c1ddc5137045041b8baea5e45dbc3e2631b328ce5c1e9208f1037501ede75f WatchSource:0}: Error finding container f1c1ddc5137045041b8baea5e45dbc3e2631b328ce5c1e9208f1037501ede75f: Status 404 returned error can't find the container with id f1c1ddc5137045041b8baea5e45dbc3e2631b328ce5c1e9208f1037501ede75f
Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.393242 4990 generic.go:334] "Generic (PLEG): container finished" podID="ddfc56ca-24fc-4541-8068-4185d01d16c1" containerID="bc40537ed51a3d440af6ef4c4b9b5e1a4266ee0446eb78a4a561124472e1a564" exitCode=0
Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.393353 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb787"
event={"ID":"ddfc56ca-24fc-4541-8068-4185d01d16c1","Type":"ContainerDied","Data":"bc40537ed51a3d440af6ef4c4b9b5e1a4266ee0446eb78a4a561124472e1a564"} Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.393414 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb787" event={"ID":"ddfc56ca-24fc-4541-8068-4185d01d16c1","Type":"ContainerStarted","Data":"f1c1ddc5137045041b8baea5e45dbc3e2631b328ce5c1e9208f1037501ede75f"} Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.396197 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wwxs" event={"ID":"3336377a-d1c7-4c90-a824-63aa62dd945c","Type":"ContainerStarted","Data":"3edfb592d13c55746fb08eee5217323f140fc3c763c3cebfd7881410d65dca15"} Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.398275 4990 generic.go:334] "Generic (PLEG): container finished" podID="9a7d8297-4858-4ff7-ac43-3ee1771383a8" containerID="155c632b70f593b2431eac8a23fe5578c1833ec6ed899b6078787bb439747418" exitCode=0 Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.398323 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whvp8" event={"ID":"9a7d8297-4858-4ff7-ac43-3ee1771383a8","Type":"ContainerDied","Data":"155c632b70f593b2431eac8a23fe5578c1833ec6ed899b6078787bb439747418"} Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.398345 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whvp8" event={"ID":"9a7d8297-4858-4ff7-ac43-3ee1771383a8","Type":"ContainerStarted","Data":"0597458fe680337b1b4477d27c6fb91ada43eadd8f9258a089516965960cc7da"} Dec 05 01:15:52 crc kubenswrapper[4990]: I1205 01:15:52.449224 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5wwxs" podStartSLOduration=1.789465399 podStartE2EDuration="4.449201626s" podCreationTimestamp="2025-12-05 01:15:48 +0000 UTC" firstStartedPulling="2025-12-05 01:15:49.352147316 +0000 UTC m=+447.728362687" lastFinishedPulling="2025-12-05 01:15:52.011883553 +0000 UTC m=+450.388098914" observedRunningTime="2025-12-05 01:15:52.445737617 +0000 UTC m=+450.821952988" watchObservedRunningTime="2025-12-05 01:15:52.449201626 +0000 UTC m=+450.825416997" Dec 05 01:15:53 crc kubenswrapper[4990]: I1205 01:15:53.405883 4990 generic.go:334] "Generic (PLEG): container finished" podID="ddfc56ca-24fc-4541-8068-4185d01d16c1" containerID="689f32f90fa16b13391a538e57eb462ea151c70ae90da020b8f64c2af2b7501a" exitCode=0 Dec 05 01:15:53 crc kubenswrapper[4990]: I1205 01:15:53.405925 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb787" event={"ID":"ddfc56ca-24fc-4541-8068-4185d01d16c1","Type":"ContainerDied","Data":"689f32f90fa16b13391a538e57eb462ea151c70ae90da020b8f64c2af2b7501a"} Dec 05 01:15:53 crc kubenswrapper[4990]: I1205 01:15:53.408228 4990 generic.go:334] "Generic (PLEG): container finished" podID="9a7d8297-4858-4ff7-ac43-3ee1771383a8" containerID="62bee952bd94522028babece7fe4e2571ce955ef32e4218084ccd748d390b334" exitCode=0 Dec 05 01:15:53 crc kubenswrapper[4990]: I1205 01:15:53.409021 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whvp8" event={"ID":"9a7d8297-4858-4ff7-ac43-3ee1771383a8","Type":"ContainerDied","Data":"62bee952bd94522028babece7fe4e2571ce955ef32e4218084ccd748d390b334"} Dec 05 01:15:54 crc kubenswrapper[4990]: I1205 01:15:54.415334 4990 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb787" event={"ID":"ddfc56ca-24fc-4541-8068-4185d01d16c1","Type":"ContainerStarted","Data":"8df8a65b8376717e1585c4d015702a470d458b3b1d6cbcd311962927dcebe5f8"} Dec 05 01:15:54 crc kubenswrapper[4990]: I1205 01:15:54.419780 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whvp8" event={"ID":"9a7d8297-4858-4ff7-ac43-3ee1771383a8","Type":"ContainerStarted","Data":"f8e850d038b7d5e0f50213c7460375352ffd3429ab6a690b6c0861277afcaa39"} Dec 05 01:15:54 crc kubenswrapper[4990]: I1205 01:15:54.431273 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cb787" podStartSLOduration=3.029009396 podStartE2EDuration="4.431250296s" podCreationTimestamp="2025-12-05 01:15:50 +0000 UTC" firstStartedPulling="2025-12-05 01:15:52.394741394 +0000 UTC m=+450.770956745" lastFinishedPulling="2025-12-05 01:15:53.796982284 +0000 UTC m=+452.173197645" observedRunningTime="2025-12-05 01:15:54.429558348 +0000 UTC m=+452.805773709" watchObservedRunningTime="2025-12-05 01:15:54.431250296 +0000 UTC m=+452.807465657" Dec 05 01:15:54 crc kubenswrapper[4990]: I1205 01:15:54.453187 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whvp8" podStartSLOduration=3.030805158 podStartE2EDuration="4.453171125s" podCreationTimestamp="2025-12-05 01:15:50 +0000 UTC" firstStartedPulling="2025-12-05 01:15:52.399579443 +0000 UTC m=+450.775794804" lastFinishedPulling="2025-12-05 01:15:53.82194541 +0000 UTC m=+452.198160771" observedRunningTime="2025-12-05 01:15:54.45124241 +0000 UTC m=+452.827457781" watchObservedRunningTime="2025-12-05 01:15:54.453171125 +0000 UTC m=+452.829386486" Dec 05 01:15:58 crc kubenswrapper[4990]: I1205 01:15:58.553519 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lmm8t" Dec 05 01:15:58 crc kubenswrapper[4990]: I1205 01:15:58.554523 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lmm8t" Dec 05 01:15:58 crc kubenswrapper[4990]: I1205 01:15:58.618961 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lmm8t" Dec 05 01:15:58 crc kubenswrapper[4990]: I1205 01:15:58.759971 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5wwxs" Dec 05 01:15:58 crc kubenswrapper[4990]: I1205 01:15:58.760239 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5wwxs" Dec 05 01:15:58 crc kubenswrapper[4990]: I1205 01:15:58.808375 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5wwxs" Dec 05 01:15:59 crc kubenswrapper[4990]: I1205 01:15:59.489582 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lmm8t" Dec 05 01:15:59 crc kubenswrapper[4990]: I1205 01:15:59.495647 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5wwxs" Dec 05 01:16:00 crc kubenswrapper[4990]: I1205 01:16:00.975664 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whvp8" Dec 05 01:16:00 crc 
kubenswrapper[4990]: I1205 01:16:00.976010 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whvp8" Dec 05 01:16:01 crc kubenswrapper[4990]: I1205 01:16:01.015495 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whvp8" Dec 05 01:16:01 crc kubenswrapper[4990]: I1205 01:16:01.224810 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cb787" Dec 05 01:16:01 crc kubenswrapper[4990]: I1205 01:16:01.224893 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cb787" Dec 05 01:16:01 crc kubenswrapper[4990]: I1205 01:16:01.264555 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cb787" Dec 05 01:16:01 crc kubenswrapper[4990]: I1205 01:16:01.495817 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whvp8" Dec 05 01:16:01 crc kubenswrapper[4990]: I1205 01:16:01.523415 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cb787" Dec 05 01:17:51 crc kubenswrapper[4990]: I1205 01:17:51.823869 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:17:51 crc kubenswrapper[4990]: I1205 01:17:51.824724 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:18:21 crc kubenswrapper[4990]: I1205 01:18:21.824057 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:18:21 crc kubenswrapper[4990]: I1205 01:18:21.824794 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:18:51 crc kubenswrapper[4990]: I1205 01:18:51.824409 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:18:51 crc kubenswrapper[4990]: I1205 01:18:51.825271 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 05 01:18:51 crc kubenswrapper[4990]: I1205 01:18:51.825346 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:18:51 crc kubenswrapper[4990]: I1205 01:18:51.826153 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fff34981cf4773bd36204e463d90f40dceebdf614dbe550df05744f4f5aade7"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:18:51 crc kubenswrapper[4990]: I1205 01:18:51.826255 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://0fff34981cf4773bd36204e463d90f40dceebdf614dbe550df05744f4f5aade7" gracePeriod=600 Dec 05 01:18:52 crc kubenswrapper[4990]: I1205 01:18:52.585413 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="0fff34981cf4773bd36204e463d90f40dceebdf614dbe550df05744f4f5aade7" exitCode=0 Dec 05 01:18:52 crc kubenswrapper[4990]: I1205 01:18:52.585940 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"0fff34981cf4773bd36204e463d90f40dceebdf614dbe550df05744f4f5aade7"} Dec 05 01:18:52 crc kubenswrapper[4990]: I1205 01:18:52.585973 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"e80af07b88563f0c4908362eee70be4cc7b74f59335c734b90ad5639312c2fd8"} Dec 05 01:18:52 crc kubenswrapper[4990]: I1205 01:18:52.585993 4990 scope.go:117] "RemoveContainer" containerID="b40660c456587b8dd05170b85828fefb6a3f2c0ef31ac80948416bdfddfbcfec" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.708161 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jdc57"] Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.709394 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.722831 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jdc57"] Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911446 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79c1df86-b749-41b6-a9ea-ba5362c19853-registry-certificates\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911533 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79c1df86-b749-41b6-a9ea-ba5362c19853-trusted-ca\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911584 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-bound-sa-token\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911613 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84w2\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-kube-api-access-d84w2\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911690 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-registry-tls\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911729 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79c1df86-b749-41b6-a9ea-ba5362c19853-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911750 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79c1df86-b749-41b6-a9ea-ba5362c19853-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.911789 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:13 crc kubenswrapper[4990]: I1205 01:19:13.932370 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.013165 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79c1df86-b749-41b6-a9ea-ba5362c19853-registry-certificates\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.013473 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79c1df86-b749-41b6-a9ea-ba5362c19853-trusted-ca\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.013736 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-bound-sa-token\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.013905 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84w2\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-kube-api-access-d84w2\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.014105 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-registry-tls\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.014270 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79c1df86-b749-41b6-a9ea-ba5362c19853-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.014415 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79c1df86-b749-41b6-a9ea-ba5362c19853-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.015275 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79c1df86-b749-41b6-a9ea-ba5362c19853-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.015869 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79c1df86-b749-41b6-a9ea-ba5362c19853-trusted-ca\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.016152 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79c1df86-b749-41b6-a9ea-ba5362c19853-registry-certificates\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.021704 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79c1df86-b749-41b6-a9ea-ba5362c19853-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.022852 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-registry-tls\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.044740 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-bound-sa-token\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.048545 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84w2\" (UniqueName: \"kubernetes.io/projected/79c1df86-b749-41b6-a9ea-ba5362c19853-kube-api-access-d84w2\") pod \"image-registry-66df7c8f76-jdc57\" (UID: \"79c1df86-b749-41b6-a9ea-ba5362c19853\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.329378 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.581826 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jdc57"] Dec 05 01:19:14 crc kubenswrapper[4990]: I1205 01:19:14.737306 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" event={"ID":"79c1df86-b749-41b6-a9ea-ba5362c19853","Type":"ContainerStarted","Data":"94996010833db9dd7c6d9902cc8bd412fdf1fbb9902c7a6899e035ca8f5bc414"} Dec 05 01:19:15 crc kubenswrapper[4990]: I1205 01:19:15.749461 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" event={"ID":"79c1df86-b749-41b6-a9ea-ba5362c19853","Type":"ContainerStarted","Data":"b2930f3216ad7872e02534ebc7a00dff0e520cd139160e33b1d197f8bb63564d"} Dec 05 01:19:15 crc kubenswrapper[4990]: I1205 01:19:15.750395 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:15 crc kubenswrapper[4990]: I1205 01:19:15.781551 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" podStartSLOduration=2.781511403 podStartE2EDuration="2.781511403s" podCreationTimestamp="2025-12-05 01:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:19:15.776209542 +0000 UTC m=+654.152424963" watchObservedRunningTime="2025-12-05 01:19:15.781511403 +0000 UTC m=+654.157726774" Dec 05 01:19:34 crc kubenswrapper[4990]: I1205 01:19:34.336946 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jdc57" Dec 05 01:19:34 crc kubenswrapper[4990]: I1205 01:19:34.422581 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vchzm"] Dec 05 01:19:59 crc kubenswrapper[4990]: I1205 01:19:59.478006 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" podUID="fde7ef59-700e-49a8-87f5-eac2580a1a54" containerName="registry" containerID="cri-o://a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b" gracePeriod=30 Dec 05 01:19:59 crc kubenswrapper[4990]: I1205 01:19:59.930236 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.039862 4990 generic.go:334] "Generic (PLEG): container finished" podID="fde7ef59-700e-49a8-87f5-eac2580a1a54" containerID="a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b" exitCode=0 Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.039926 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" event={"ID":"fde7ef59-700e-49a8-87f5-eac2580a1a54","Type":"ContainerDied","Data":"a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b"} Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.039948 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.040037 4990 scope.go:117] "RemoveContainer" containerID="a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.040014 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vchzm" event={"ID":"fde7ef59-700e-49a8-87f5-eac2580a1a54","Type":"ContainerDied","Data":"6ef3034794a240df575c9064a25c428b6cba214c2cafec0d6bfade89a9559a7e"} Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.059710 4990 scope.go:117] "RemoveContainer" containerID="a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b" Dec 05 01:20:00 crc kubenswrapper[4990]: E1205 01:20:00.060536 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b\": container with ID starting with a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b not found: ID does not exist" containerID="a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.060610 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b"} err="failed to get container status \"a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b\": rpc error: code = NotFound desc = could not find container \"a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b\": container with ID starting with a2545ac0fd01f29409d1d6dea3d30b94fadae0ef0f02c5615a79548cf7fa997b not found: ID does not exist" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.094205 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-certificates\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.094265 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-tls\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.094320 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tswqd\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-kube-api-access-tswqd\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.094349 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fde7ef59-700e-49a8-87f5-eac2580a1a54-ca-trust-extracted\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.095135 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-certificates" (OuterVolumeSpecName: "registry-certificates") 
pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.094474 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.095747 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-bound-sa-token\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.095780 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-trusted-ca\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.095830 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fde7ef59-700e-49a8-87f5-eac2580a1a54-installation-pull-secrets\") pod \"fde7ef59-700e-49a8-87f5-eac2580a1a54\" (UID: \"fde7ef59-700e-49a8-87f5-eac2580a1a54\") " Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.096181 4990 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.096629 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.099994 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.100414 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.100644 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde7ef59-700e-49a8-87f5-eac2580a1a54-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.100714 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-kube-api-access-tswqd" (OuterVolumeSpecName: "kube-api-access-tswqd") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "kube-api-access-tswqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.108002 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.113829 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde7ef59-700e-49a8-87f5-eac2580a1a54-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fde7ef59-700e-49a8-87f5-eac2580a1a54" (UID: "fde7ef59-700e-49a8-87f5-eac2580a1a54"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.197831 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tswqd\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-kube-api-access-tswqd\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.198080 4990 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fde7ef59-700e-49a8-87f5-eac2580a1a54-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.198160 4990 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.198242 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fde7ef59-700e-49a8-87f5-eac2580a1a54-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.198309 4990 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fde7ef59-700e-49a8-87f5-eac2580a1a54-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.198368 4990 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fde7ef59-700e-49a8-87f5-eac2580a1a54-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.385841 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vchzm"] Dec 05 01:20:00 crc kubenswrapper[4990]: I1205 01:20:00.392390 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vchzm"] Dec 05 01:20:01 crc kubenswrapper[4990]: I1205 01:20:01.941618 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde7ef59-700e-49a8-87f5-eac2580a1a54" path="/var/lib/kubelet/pods/fde7ef59-700e-49a8-87f5-eac2580a1a54/volumes" Dec 05 01:20:53 crc kubenswrapper[4990]: I1205 01:20:53.222139 4990 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 05 01:21:21 crc kubenswrapper[4990]: I1205 01:21:21.824511 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:21:21 crc kubenswrapper[4990]: I1205 01:21:21.825256 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:21:51 crc kubenswrapper[4990]: I1205 01:21:51.824135 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:21:51 crc kubenswrapper[4990]: I1205 01:21:51.824842 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.824300 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.825032 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.825095 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.825890 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e80af07b88563f0c4908362eee70be4cc7b74f59335c734b90ad5639312c2fd8"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.825987 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://e80af07b88563f0c4908362eee70be4cc7b74f59335c734b90ad5639312c2fd8" gracePeriod=600 Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.967081 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="e80af07b88563f0c4908362eee70be4cc7b74f59335c734b90ad5639312c2fd8" exitCode=0 Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.967146 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"e80af07b88563f0c4908362eee70be4cc7b74f59335c734b90ad5639312c2fd8"} Dec 05 01:22:21 crc kubenswrapper[4990]: I1205 01:22:21.967390 4990 scope.go:117] "RemoveContainer" containerID="0fff34981cf4773bd36204e463d90f40dceebdf614dbe550df05744f4f5aade7" Dec 05 01:22:22 crc kubenswrapper[4990]: I1205 01:22:22.977884 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"6f2b4a96536639cbb9d3bc8aec6f26003832337aeb02bfe5ac6cc1d82eae2a27"} Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.441526 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w5j44"] Dec 05 01:23:26 crc kubenswrapper[4990]: 
E1205 01:23:26.442724 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde7ef59-700e-49a8-87f5-eac2580a1a54" containerName="registry" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.442755 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde7ef59-700e-49a8-87f5-eac2580a1a54" containerName="registry" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.442984 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde7ef59-700e-49a8-87f5-eac2580a1a54" containerName="registry" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.444349 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.462730 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5j44"] Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.544647 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8bh\" (UniqueName: \"kubernetes.io/projected/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-kube-api-access-kw8bh\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.544738 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-utilities\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.544885 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-catalog-content\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.646892 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8bh\" (UniqueName: \"kubernetes.io/projected/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-kube-api-access-kw8bh\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.647024 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-utilities\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.647097 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-catalog-content\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.647747 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-catalog-content\") 
pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.647905 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-utilities\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.672153 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8bh\" (UniqueName: \"kubernetes.io/projected/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-kube-api-access-kw8bh\") pod \"redhat-operators-w5j44\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") " pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.769697 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:26 crc kubenswrapper[4990]: I1205 01:23:26.975203 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5j44"] Dec 05 01:23:27 crc kubenswrapper[4990]: I1205 01:23:27.403952 4990 generic.go:334] "Generic (PLEG): container finished" podID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerID="2b0d2fa50c121518088c310719815fa9cdcb6c836cc8926fd9f092cab02cfcbe" exitCode=0 Dec 05 01:23:27 crc kubenswrapper[4990]: I1205 01:23:27.403993 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerDied","Data":"2b0d2fa50c121518088c310719815fa9cdcb6c836cc8926fd9f092cab02cfcbe"} Dec 05 01:23:27 crc kubenswrapper[4990]: I1205 01:23:27.404040 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerStarted","Data":"d9cc25dfa7196303e253b467dda2cef7086789ac380e04854b906357dba0be28"} Dec 05 01:23:27 crc kubenswrapper[4990]: I1205 01:23:27.405382 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 01:23:28 crc kubenswrapper[4990]: I1205 01:23:28.415456 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerStarted","Data":"6fc76d3158cf59d3722f74b8c20bd758876f173a2a31fac526d309fdd7e768be"} Dec 05 01:23:29 crc kubenswrapper[4990]: I1205 01:23:29.425551 4990 generic.go:334] "Generic (PLEG): container finished" podID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerID="6fc76d3158cf59d3722f74b8c20bd758876f173a2a31fac526d309fdd7e768be" exitCode=0 Dec 05 01:23:29 crc kubenswrapper[4990]: I1205 01:23:29.425640 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerDied","Data":"6fc76d3158cf59d3722f74b8c20bd758876f173a2a31fac526d309fdd7e768be"} Dec 05 01:23:30 crc kubenswrapper[4990]: I1205 01:23:30.434783 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerStarted","Data":"2d356c3e786b3416a6c6d6d7484d2f2f022c880cdb494b46c626d07c377b4332"} Dec 05 01:23:30 crc 
kubenswrapper[4990]: I1205 01:23:30.458827 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w5j44" podStartSLOduration=2.033515294 podStartE2EDuration="4.458792709s" podCreationTimestamp="2025-12-05 01:23:26 +0000 UTC" firstStartedPulling="2025-12-05 01:23:27.405175844 +0000 UTC m=+905.781391205" lastFinishedPulling="2025-12-05 01:23:29.830453249 +0000 UTC m=+908.206668620" observedRunningTime="2025-12-05 01:23:30.4567265 +0000 UTC m=+908.832941911" watchObservedRunningTime="2025-12-05 01:23:30.458792709 +0000 UTC m=+908.835008110" Dec 05 01:23:36 crc kubenswrapper[4990]: I1205 01:23:36.770860 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:36 crc kubenswrapper[4990]: I1205 01:23:36.771554 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:36 crc kubenswrapper[4990]: I1205 01:23:36.821259 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:37 crc kubenswrapper[4990]: I1205 01:23:37.538115 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:37 crc kubenswrapper[4990]: I1205 01:23:37.597077 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5j44"] Dec 05 01:23:39 crc kubenswrapper[4990]: I1205 01:23:39.487791 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w5j44" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="registry-server" containerID="cri-o://2d356c3e786b3416a6c6d6d7484d2f2f022c880cdb494b46c626d07c377b4332" gracePeriod=2 Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.502313 4990 generic.go:334] "Generic (PLEG): container finished" podID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerID="2d356c3e786b3416a6c6d6d7484d2f2f022c880cdb494b46c626d07c377b4332" exitCode=0 Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.502399 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerDied","Data":"2d356c3e786b3416a6c6d6d7484d2f2f022c880cdb494b46c626d07c377b4332"} Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.598712 4990 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.598712 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5j44"
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.660544 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-catalog-content\") pod \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") "
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.660594 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-utilities\") pod \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") "
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.660625 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8bh\" (UniqueName: \"kubernetes.io/projected/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-kube-api-access-kw8bh\") pod \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\" (UID: \"0df03a90-4e35-4c96-9028-f04d2c2d7a0c\") "
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.662731 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-utilities" (OuterVolumeSpecName: "utilities") pod "0df03a90-4e35-4c96-9028-f04d2c2d7a0c" (UID: "0df03a90-4e35-4c96-9028-f04d2c2d7a0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.668855 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-kube-api-access-kw8bh" (OuterVolumeSpecName: "kube-api-access-kw8bh") pod "0df03a90-4e35-4c96-9028-f04d2c2d7a0c" (UID: "0df03a90-4e35-4c96-9028-f04d2c2d7a0c"). InnerVolumeSpecName "kube-api-access-kw8bh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.762149 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.762200 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8bh\" (UniqueName: \"kubernetes.io/projected/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-kube-api-access-kw8bh\") on node \"crc\" DevicePath \"\""
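Each volume passes through the same three phases: "UnmountVolume started" (reconciler_common.go:159) → "UnmountVolume.TearDown succeeded" (operation_generator.go:803) → "Volume detached" (reconciler_common.go:293). The timestamps are telling: the utilities and kube-api-access teardowns complete within a few milliseconds, while catalog-content's TearDown does not log until 01:23:41.806350 (just below), roughly 146 ms after its unmount started — plausibly because tearing down that empty-dir deletes the extracted catalog content. The arithmetic, in Go, with timestamps copied from the entries:

```go
package main

import (
	"fmt"
	"time"
)

// at parses a klog time-of-day such as "01:23:41.806350" (date ignored).
func at(s string) time.Time {
	t, err := time.Parse("15:04:05.000000", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	fmt.Println("utilities:      ", at("01:23:41.662731").Sub(at("01:23:41.660594"))) // ~2.137ms
	fmt.Println("catalog-content:", at("01:23:41.806350").Sub(at("01:23:41.660544"))) // ~145.806ms
}
```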
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:23:41 crc kubenswrapper[4990]: I1205 01:23:41.863501 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df03a90-4e35-4c96-9028-f04d2c2d7a0c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.511906 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5j44" event={"ID":"0df03a90-4e35-4c96-9028-f04d2c2d7a0c","Type":"ContainerDied","Data":"d9cc25dfa7196303e253b467dda2cef7086789ac380e04854b906357dba0be28"} Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.511953 4990 scope.go:117] "RemoveContainer" containerID="2d356c3e786b3416a6c6d6d7484d2f2f022c880cdb494b46c626d07c377b4332" Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.512019 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5j44" Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.534851 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5j44"] Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.537249 4990 scope.go:117] "RemoveContainer" containerID="6fc76d3158cf59d3722f74b8c20bd758876f173a2a31fac526d309fdd7e768be" Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.556223 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w5j44"] Dec 05 01:23:42 crc kubenswrapper[4990]: I1205 01:23:42.562859 4990 scope.go:117] "RemoveContainer" containerID="2b0d2fa50c121518088c310719815fa9cdcb6c836cc8926fd9f092cab02cfcbe" Dec 05 01:23:43 crc kubenswrapper[4990]: I1205 01:23:43.942757 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" path="/var/lib/kubelet/pods/0df03a90-4e35-4c96-9028-f04d2c2d7a0c/volumes" Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.359095 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7zk5j"] Dec 05 01:23:45 crc kubenswrapper[4990]: E1205 01:23:45.360877 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="registry-server" Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.361062 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="registry-server" Dec 05 01:23:45 crc kubenswrapper[4990]: E1205 01:23:45.361308 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="extract-content" Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.364279 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="extract-content" Dec 05 01:23:45 crc kubenswrapper[4990]: E1205 01:23:45.364530 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="extract-utilities" Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.364698 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="extract-utilities" Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.365062 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df03a90-4e35-4c96-9028-f04d2c2d7a0c" containerName="registry-server" Dec 05 01:23:45 crc 
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.366529 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.373576 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zk5j"]
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.508879 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-utilities\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.509020 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-catalog-content\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.509058 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwqv\" (UniqueName: \"kubernetes.io/projected/7576a00e-b357-4dcf-849f-4c639425e6f4-kube-api-access-6xwqv\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.610615 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-catalog-content\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.610773 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwqv\" (UniqueName: \"kubernetes.io/projected/7576a00e-b357-4dcf-849f-4c639425e6f4-kube-api-access-6xwqv\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.610861 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-utilities\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.611576 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-catalog-content\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.611668 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-utilities\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
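The mount side mirrors the unmount side: "VerifyControllerAttachedVolume started" → "MountVolume started" → "MountVolume.SetUp succeeded" for each of the three volumes (the projected token's SetUp completes a few milliseconds later, just below). The UniqueName in every entry follows a convention visible on its face, <plugin>/<podUID>-<volumeName>; a trivial Go reconstruction:

```go
package main

import "fmt"

// uniqueName rebuilds the UniqueName pattern seen in the mount/unmount entries.
func uniqueName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	podUID := "7576a00e-b357-4dcf-849f-4c639425e6f4"
	fmt.Println(uniqueName("kubernetes.io/empty-dir", podUID, "catalog-content"))
	fmt.Println(uniqueName("kubernetes.io/projected", podUID, "kube-api-access-6xwqv"))
	// Output matches the UniqueName values logged above.
}
```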
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.629345 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwqv\" (UniqueName: \"kubernetes.io/projected/7576a00e-b357-4dcf-849f-4c639425e6f4-kube-api-access-6xwqv\") pod \"redhat-marketplace-7zk5j\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") " pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.693585 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:45 crc kubenswrapper[4990]: I1205 01:23:45.912222 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zk5j"]
Dec 05 01:23:46 crc kubenswrapper[4990]: I1205 01:23:46.538255 4990 generic.go:334] "Generic (PLEG): container finished" podID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerID="2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9" exitCode=0
Dec 05 01:23:46 crc kubenswrapper[4990]: I1205 01:23:46.538398 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zk5j" event={"ID":"7576a00e-b357-4dcf-849f-4c639425e6f4","Type":"ContainerDied","Data":"2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9"}
Dec 05 01:23:46 crc kubenswrapper[4990]: I1205 01:23:46.538871 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zk5j" event={"ID":"7576a00e-b357-4dcf-849f-4c639425e6f4","Type":"ContainerStarted","Data":"d9368650667393f0925f61cc919bdc468d3c72ad43fe8c395372531778d78287"}
Dec 05 01:23:47 crc kubenswrapper[4990]: I1205 01:23:47.547668 4990 generic.go:334] "Generic (PLEG): container finished" podID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerID="86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd" exitCode=0
Dec 05 01:23:47 crc kubenswrapper[4990]: I1205 01:23:47.547728 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zk5j" event={"ID":"7576a00e-b357-4dcf-849f-4c639425e6f4","Type":"ContainerDied","Data":"86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd"}
Dec 05 01:23:48 crc kubenswrapper[4990]: I1205 01:23:48.560350 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zk5j" event={"ID":"7576a00e-b357-4dcf-849f-4c639425e6f4","Type":"ContainerStarted","Data":"db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655"}
Dec 05 01:23:48 crc kubenswrapper[4990]: I1205 01:23:48.594034 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7zk5j" podStartSLOduration=2.194872234 podStartE2EDuration="3.594006468s" podCreationTimestamp="2025-12-05 01:23:45 +0000 UTC" firstStartedPulling="2025-12-05 01:23:46.540229086 +0000 UTC m=+924.916444457" lastFinishedPulling="2025-12-05 01:23:47.93936329 +0000 UTC m=+926.315578691" observedRunningTime="2025-12-05 01:23:48.587885584 +0000 UTC m=+926.964100985" watchObservedRunningTime="2025-12-05 01:23:48.594006468 +0000 UTC m=+926.970221859"
Dec 05 01:23:55 crc kubenswrapper[4990]: I1205 01:23:55.694776 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:55 crc kubenswrapper[4990]: I1205 01:23:55.695554 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:55 crc kubenswrapper[4990]: I1205 01:23:55.762474 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:56 crc kubenswrapper[4990]: I1205 01:23:56.665053 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:56 crc kubenswrapper[4990]: I1205 01:23:56.998902 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zk5j"]
Dec 05 01:23:58 crc kubenswrapper[4990]: I1205 01:23:58.620574 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7zk5j" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="registry-server" containerID="cri-o://db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655" gracePeriod=2
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.030400 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.198037 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-utilities\") pod \"7576a00e-b357-4dcf-849f-4c639425e6f4\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") "
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.198132 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-catalog-content\") pod \"7576a00e-b357-4dcf-849f-4c639425e6f4\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") "
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.198237 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xwqv\" (UniqueName: \"kubernetes.io/projected/7576a00e-b357-4dcf-849f-4c639425e6f4-kube-api-access-6xwqv\") pod \"7576a00e-b357-4dcf-849f-4c639425e6f4\" (UID: \"7576a00e-b357-4dcf-849f-4c639425e6f4\") "
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.199399 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-utilities" (OuterVolumeSpecName: "utilities") pod "7576a00e-b357-4dcf-849f-4c639425e6f4" (UID: "7576a00e-b357-4dcf-849f-4c639425e6f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.207317 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7576a00e-b357-4dcf-849f-4c639425e6f4-kube-api-access-6xwqv" (OuterVolumeSpecName: "kube-api-access-6xwqv") pod "7576a00e-b357-4dcf-849f-4c639425e6f4" (UID: "7576a00e-b357-4dcf-849f-4c639425e6f4"). InnerVolumeSpecName "kube-api-access-6xwqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.231405 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7576a00e-b357-4dcf-849f-4c639425e6f4" (UID: "7576a00e-b357-4dcf-849f-4c639425e6f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.299545 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.299587 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7576a00e-b357-4dcf-849f-4c639425e6f4-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.299609 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xwqv\" (UniqueName: \"kubernetes.io/projected/7576a00e-b357-4dcf-849f-4c639425e6f4-kube-api-access-6xwqv\") on node \"crc\" DevicePath \"\""
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.632786 4990 generic.go:334] "Generic (PLEG): container finished" podID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerID="db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655" exitCode=0
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.632870 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zk5j" event={"ID":"7576a00e-b357-4dcf-849f-4c639425e6f4","Type":"ContainerDied","Data":"db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655"}
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.632931 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zk5j" event={"ID":"7576a00e-b357-4dcf-849f-4c639425e6f4","Type":"ContainerDied","Data":"d9368650667393f0925f61cc919bdc468d3c72ad43fe8c395372531778d78287"}
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.632967 4990 scope.go:117] "RemoveContainer" containerID="db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.633273 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zk5j"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.667567 4990 scope.go:117] "RemoveContainer" containerID="86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.690012 4990 scope.go:117] "RemoveContainer" containerID="2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.695141 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zk5j"]
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.699305 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zk5j"]
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.720459 4990 scope.go:117] "RemoveContainer" containerID="db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655"
Dec 05 01:23:59 crc kubenswrapper[4990]: E1205 01:23:59.721067 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655\": container with ID starting with db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655 not found: ID does not exist" containerID="db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.721103 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655"} err="failed to get container status \"db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655\": rpc error: code = NotFound desc = could not find container \"db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655\": container with ID starting with db5be12c3e05c3c8ac8a6c168f9893a51cdd4ae3cd90bd29fe60ae133dcd4655 not found: ID does not exist"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.721127 4990 scope.go:117] "RemoveContainer" containerID="86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd"
Dec 05 01:23:59 crc kubenswrapper[4990]: E1205 01:23:59.721794 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd\": container with ID starting with 86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd not found: ID does not exist" containerID="86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.721910 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd"} err="failed to get container status \"86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd\": rpc error: code = NotFound desc = could not find container \"86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd\": container with ID starting with 86e1d92aecc104cf5502a21f779e58831020afd12538b7cbd5eedebd8366bddd not found: ID does not exist"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.722028 4990 scope.go:117] "RemoveContainer" containerID="2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9"
Dec 05 01:23:59 crc kubenswrapper[4990]: E1205 01:23:59.722638 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9\": container with ID starting with 2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9 not found: ID does not exist" containerID="2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.722663 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9"} err="failed to get container status \"2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9\": rpc error: code = NotFound desc = could not find container \"2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9\": container with ID starting with 2ee46234db97763b5e1eb63cfad42f361bd13195f7918b8edb8db622e90ab6e9 not found: ID does not exist"
Dec 05 01:23:59 crc kubenswrapper[4990]: I1205 01:23:59.941115 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" path="/var/lib/kubelet/pods/7576a00e-b357-4dcf-849f-4c639425e6f4/volumes"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.462117 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4w6g9"]
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463094 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-controller" containerID="cri-o://758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463212 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="northd" containerID="cri-o://cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463166 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="nbdb" containerID="cri-o://27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463251 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="sbdb" containerID="cri-o://de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463281 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463358 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-node" containerID="cri-o://97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.463352 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-acl-logging" containerID="cri-o://d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.510015 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller" containerID="cri-o://f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" gracePeriod=30
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.769224 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/3.log"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.772766 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovn-acl-logging/0.log"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.773611 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovn-controller/0.log"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.774392 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823049 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5fc5t"]
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823286 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823297 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823306 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-ovn-metrics"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823330 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-ovn-metrics"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823339 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="extract-content"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823345 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="extract-content"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823353 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-acl-logging"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823359 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-acl-logging"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823369 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823376 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823383 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="registry-server"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823407 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="registry-server"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823417 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="sbdb"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823424 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="sbdb"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823430 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823437 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823447 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823453 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823460 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="extract-utilities"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823465 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="extract-utilities"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823506 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-node"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823512 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-node"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823519 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kubecfg-setup"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823526 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kubecfg-setup"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823535 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="nbdb"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823541 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="nbdb"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823551 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="northd"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823557 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="northd"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823685 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823694 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823702 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="sbdb"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823708 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="northd"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823717 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823744 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-node"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823753 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="nbdb"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823760 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823767 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovn-acl-logging"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823775 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="kube-rbac-proxy-ovn-metrics"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823781 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="7576a00e-b357-4dcf-849f-4c639425e6f4" containerName="registry-server"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823900 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823908 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: E1205 01:24:22.823923 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.823929 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.824056 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.824265 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" containerName="ovnkube-controller"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.825943 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t"
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907653 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-kubelet\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907697 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-systemd\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907734 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8ffk\" (UniqueName: \"kubernetes.io/projected/3eeec70d-1c5c-434e-90bc-95620458151c-kube-api-access-s8ffk\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907758 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-netns\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907784 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-node-log\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907806 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-ovn-kubernetes\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907806 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907837 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907830 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-ovn\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907880 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-node-log" (OuterVolumeSpecName: "node-log") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907884 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907919 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907939 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-log-socket\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.907979 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908005 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-openvswitch\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908020 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-log-socket" (OuterVolumeSpecName: "log-socket") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908039 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908064 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908114 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-var-lib-openvswitch\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908135 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-etc-openvswitch\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908160 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-slash\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908207 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-systemd-units\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908232 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-bin\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908252 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908290 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-netd\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908311 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3eeec70d-1c5c-434e-90bc-95620458151c-ovn-node-metrics-cert\") pod \"3eeec70d-1c5c-434e-90bc-95620458151c\" (UID: \"3eeec70d-1c5c-434e-90bc-95620458151c\") "
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908385 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908413 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908431 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908605 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908663 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908672 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908697 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-slash" (OuterVolumeSpecName: "host-slash") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908702 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908724 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908764 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908780 4990 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908834 4990 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908951 4990 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.908976 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909057 4990 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909091 4990 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-node-log\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909117 4990 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909143 4990 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909184 4990 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-log-socket\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909220 4990 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909241 4990 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909266 4990 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909293 4990 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.909312 4990 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-slash\") on node \"crc\" DevicePath \"\""
Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.913376 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eeec70d-1c5c-434e-90bc-95620458151c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
InnerVolumeSpecName "kube-api-access-s8ffk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:24:22 crc kubenswrapper[4990]: I1205 01:24:22.933177 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3eeec70d-1c5c-434e-90bc-95620458151c" (UID: "3eeec70d-1c5c-434e-90bc-95620458151c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.010622 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-node-log\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.010661 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-slash\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.010686 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.010702 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-ovn\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.010724 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-kubelet\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.010955 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovnkube-script-lib\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011041 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrck7\" (UniqueName: \"kubernetes.io/projected/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-kube-api-access-rrck7\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011062 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovnkube-config\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011090 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-run-netns\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011107 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-log-socket\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011121 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-cni-bin\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011205 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-systemd\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011229 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-var-lib-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011247 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-env-overrides\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011266 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-run-ovn-kubernetes\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011294 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-cni-netd\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011334 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011371 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-systemd-units\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011388 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovn-node-metrics-cert\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011403 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-etc-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011454 4990 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011468 4990 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011511 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3eeec70d-1c5c-434e-90bc-95620458151c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011524 4990 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011537 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8ffk\" (UniqueName: \"kubernetes.io/projected/3eeec70d-1c5c-434e-90bc-95620458151c-kube-api-access-s8ffk\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011695 4990 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3eeec70d-1c5c-434e-90bc-95620458151c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.011726 4990 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3eeec70d-1c5c-434e-90bc-95620458151c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.112776 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-cni-netd\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.112871 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.112915 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-cni-netd\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.112933 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovn-node-metrics-cert\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.112989 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-systemd-units\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113018 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-etc-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113048 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-node-log\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113083 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-slash\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113105 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113107 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-systemd-units\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113152 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-ovn\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113128 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-ovn\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113175 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-node-log\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113185 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-etc-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113210 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-kubelet\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113111 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113227 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-slash\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113190 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-kubelet\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113248 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: 
\"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113327 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovnkube-script-lib\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113365 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrck7\" (UniqueName: \"kubernetes.io/projected/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-kube-api-access-rrck7\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113403 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovnkube-config\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113436 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-run-netns\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113473 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-log-socket\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113555 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-cni-bin\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113645 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-systemd\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113687 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-var-lib-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113721 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-env-overrides\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113750 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-run-ovn-kubernetes\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113867 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-run-ovn-kubernetes\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113869 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-log-socket\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113917 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-cni-bin\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113943 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-host-run-netns\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113960 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-run-systemd\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.113975 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-var-lib-openvswitch\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.114076 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovnkube-script-lib\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.114946 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-env-overrides\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.114956 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovnkube-config\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.118223 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-ovn-node-metrics-cert\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.145339 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrck7\" (UniqueName: \"kubernetes.io/projected/9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b-kube-api-access-rrck7\") pod \"ovnkube-node-5fc5t\" (UID: \"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.188134 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovnkube-controller/3.log" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.191436 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovn-acl-logging/0.log" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.192157 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4w6g9_3eeec70d-1c5c-434e-90bc-95620458151c/ovn-controller/0.log" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.192942 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" exitCode=0 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.192995 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d" exitCode=0 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193021 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691" exitCode=0 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193027 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193041 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e" exitCode=0 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193061 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" exitCode=0 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" 
event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193090 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193077 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" exitCode=0 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193133 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" exitCode=143 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193097 4990 scope.go:117] "RemoveContainer" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193172 4990 generic.go:334] "Generic (PLEG): container finished" podID="3eeec70d-1c5c-434e-90bc-95620458151c" containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38" exitCode=143 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193101 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193083 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193345 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193406 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193429 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193457 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193474 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193520 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193532 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193545 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193556 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193567 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193578 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193596 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193616 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} Dec 05 01:24:23 crc 
kubenswrapper[4990]: I1205 01:24:23.193630 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193640 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193651 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193662 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193673 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193684 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193694 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193705 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193718 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193762 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193787 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193806 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193821 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193832 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} Dec 05 01:24:23 crc 
kubenswrapper[4990]: I1205 01:24:23.193843 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193854 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193867 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193881 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193895 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193908 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193929 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4w6g9" event={"ID":"3eeec70d-1c5c-434e-90bc-95620458151c","Type":"ContainerDied","Data":"6c7c225b4bf95d5db30b60660c1195bdb1d9576ebe954015a100962731b2df0f"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193949 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193963 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193975 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.193989 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.194010 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.194024 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.194038 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} Dec 05 01:24:23 crc 
kubenswrapper[4990]: I1205 01:24:23.194052 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.194064 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.194077 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.204668 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/2.log" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.205462 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/1.log" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.205605 4990 generic.go:334] "Generic (PLEG): container finished" podID="c4914133-b0cd-4d12-84d5-c99379e2324a" containerID="81c1369f091e1a32060c7351e6bfa8a258a7bc0cb73ee05d789c98d4a4a69887" exitCode=2 Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.205653 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerDied","Data":"81c1369f091e1a32060c7351e6bfa8a258a7bc0cb73ee05d789c98d4a4a69887"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.205683 4990 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602"} Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.206451 4990 scope.go:117] "RemoveContainer" containerID="81c1369f091e1a32060c7351e6bfa8a258a7bc0cb73ee05d789c98d4a4a69887" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.248765 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.258412 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4w6g9"] Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.266165 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4w6g9"] Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.278123 4990 scope.go:117] "RemoveContainer" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.304345 4990 scope.go:117] "RemoveContainer" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.327859 4990 scope.go:117] "RemoveContainer" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.355208 4990 scope.go:117] "RemoveContainer" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.386090 4990 scope.go:117] "RemoveContainer" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" 
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.407083 4990 scope.go:117] "RemoveContainer" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.425337 4990 scope.go:117] "RemoveContainer" containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.444220 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.445460 4990 scope.go:117] "RemoveContainer" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.473501 4990 scope.go:117] "RemoveContainer" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.474004 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": container with ID starting with f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500 not found: ID does not exist" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.474062 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} err="failed to get container status \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": rpc error: code = NotFound desc = could not find container \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": container with ID starting with f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.474088 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.474514 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": container with ID starting with ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b not found: ID does not exist" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.474538 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} err="failed to get container status \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": rpc error: code = NotFound desc = could not find container \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": container with ID starting with ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.474550 4990 scope.go:117] "RemoveContainer" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.475095 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": container with ID starting with de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d not found: ID does not exist" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.475132 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} err="failed to get container status \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": rpc error: code = NotFound desc = could not find container \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": container with ID starting with de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.475147 4990 scope.go:117] "RemoveContainer" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.475533 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": container with ID starting with 27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691 not found: ID does not exist" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.475553 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} err="failed to get container status \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": rpc error: code = NotFound desc = could not find container \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": container with ID starting with 27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.475585 4990 scope.go:117] "RemoveContainer" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.475894 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": container with ID starting with cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e not found: ID does not exist" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.475913 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} err="failed to get container status \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": rpc error: code = NotFound desc = could not find container \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": container with ID starting with cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.475926 4990 scope.go:117] "RemoveContainer" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" Dec 05 01:24:23 
crc kubenswrapper[4990]: E1205 01:24:23.476243 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": container with ID starting with 1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896 not found: ID does not exist" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.476263 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} err="failed to get container status \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": rpc error: code = NotFound desc = could not find container \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": container with ID starting with 1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.476293 4990 scope.go:117] "RemoveContainer" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.476555 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": container with ID starting with 97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05 not found: ID does not exist" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.476586 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} err="failed to get container status \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": rpc error: code = NotFound desc = could not find container \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": container with ID starting with 97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.476605 4990 scope.go:117] "RemoveContainer" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.477090 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": container with ID starting with d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b not found: ID does not exist" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.477149 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} err="failed to get container status \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": rpc error: code = NotFound desc = could not find container \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": container with ID starting with d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b not found: ID does not exist" Dec 05 01:24:23 crc 
kubenswrapper[4990]: I1205 01:24:23.477170 4990 scope.go:117] "RemoveContainer" containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"
Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.477465 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": container with ID starting with 758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38 not found: ID does not exist" containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.477502 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} err="failed to get container status \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": rpc error: code = NotFound desc = could not find container \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": container with ID starting with 758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.477514 4990 scope.go:117] "RemoveContainer" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"
Dec 05 01:24:23 crc kubenswrapper[4990]: E1205 01:24:23.477912 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": container with ID starting with 02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5 not found: ID does not exist" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.477930 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} err="failed to get container status \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": rpc error: code = NotFound desc = could not find container \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": container with ID starting with 02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.478000 4990 scope.go:117] "RemoveContainer" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.478300 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} err="failed to get container status \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": rpc error: code = NotFound desc = could not find container \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": container with ID starting with f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.478316 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"
Dec 05 01:24:23 crc kubenswrapper[4990]: W1205 01:24:23.478751 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ef9ff6e_2032_4cb7_8f4f_3030dc0ea52b.slice/crio-14412e7700edfdb4937f1a46675d75605d45f6ec5385851e2fdf4a247596c535 WatchSource:0}: Error finding container 14412e7700edfdb4937f1a46675d75605d45f6ec5385851e2fdf4a247596c535: Status 404 returned error can't find the container with id 14412e7700edfdb4937f1a46675d75605d45f6ec5385851e2fdf4a247596c535
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.478822 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} err="failed to get container status \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": rpc error: code = NotFound desc = could not find container \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": container with ID starting with ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.478835 4990 scope.go:117] "RemoveContainer" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479149 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} err="failed to get container status \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": rpc error: code = NotFound desc = could not find container \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": container with ID starting with de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479161 4990 scope.go:117] "RemoveContainer" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479389 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} err="failed to get container status \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": rpc error: code = NotFound desc = could not find container \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": container with ID starting with 27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479403 4990 scope.go:117] "RemoveContainer" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479696 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} err="failed to get container status \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": rpc error: code = NotFound desc = could not find container \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": container with ID starting with cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479710 4990 scope.go:117] "RemoveContainer" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479984 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} err="failed to get container status \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": rpc error: code = NotFound desc = could not find container \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": container with ID starting with 1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.479995 4990 scope.go:117] "RemoveContainer" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480157 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} err="failed to get container status \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": rpc error: code = NotFound desc = could not find container \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": container with ID starting with 97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480168 4990 scope.go:117] "RemoveContainer" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480410 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} err="failed to get container status \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": rpc error: code = NotFound desc = could not find container \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": container with ID starting with d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480423 4990 scope.go:117] "RemoveContainer" containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480684 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} err="failed to get container status \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": rpc error: code = NotFound desc = could not find container \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": container with ID starting with 758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480696 4990 scope.go:117] "RemoveContainer" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480931 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} err="failed to get container status \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": rpc error: code = NotFound desc = could not find container \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": container with ID starting with 02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.480957 4990 scope.go:117] "RemoveContainer" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.481540 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} err="failed to get container status \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": rpc error: code = NotFound desc = could not find container \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": container with ID starting with f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.481556 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.481795 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} err="failed to get container status \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": rpc error: code = NotFound desc = could not find container \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": container with ID starting with ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.481809 4990 scope.go:117] "RemoveContainer" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.483435 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} err="failed to get container status \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": rpc error: code = NotFound desc = could not find container \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": container with ID starting with de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.483456 4990 scope.go:117] "RemoveContainer" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.483758 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} err="failed to get container status \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": rpc error: code = NotFound desc = could not find container \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": container with ID starting with 27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691 not found: ID does not exist"
Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.483772 4990 scope.go:117] "RemoveContainer" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"
containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} err="failed to get container status \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": rpc error: code = NotFound desc = could not find container \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": container with ID starting with cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.489204 4990 scope.go:117] "RemoveContainer" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.489613 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} err="failed to get container status \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": rpc error: code = NotFound desc = could not find container \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": container with ID starting with 1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.489656 4990 scope.go:117] "RemoveContainer" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.490197 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} err="failed to get container status \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": rpc error: code = NotFound desc = could not find container \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": container with ID starting with 97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.490225 4990 scope.go:117] "RemoveContainer" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.490473 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} err="failed to get container status \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": rpc error: code = NotFound desc = could not find container \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": container with ID starting with d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.490513 4990 scope.go:117] "RemoveContainer" containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.490958 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} err="failed to get container status \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": rpc error: code = NotFound desc = could not find container \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": container with ID starting with 758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38 not found: ID does not exist" Dec 
05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.490985 4990 scope.go:117] "RemoveContainer" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.491279 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} err="failed to get container status \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": rpc error: code = NotFound desc = could not find container \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": container with ID starting with 02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.491314 4990 scope.go:117] "RemoveContainer" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.491579 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} err="failed to get container status \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": rpc error: code = NotFound desc = could not find container \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": container with ID starting with f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.491608 4990 scope.go:117] "RemoveContainer" containerID="ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.492011 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b"} err="failed to get container status \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": rpc error: code = NotFound desc = could not find container \"ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b\": container with ID starting with ebf97da3959fdefd4e4e190892e6799d2e843eb4910de5bf7e56ca172b5f3b0b not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.492038 4990 scope.go:117] "RemoveContainer" containerID="de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.492354 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d"} err="failed to get container status \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": rpc error: code = NotFound desc = could not find container \"de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d\": container with ID starting with de73994a6bce2aae8bc8ae1cc19e686fe6c20be5177801e7a41d9171b6c7574d not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.492435 4990 scope.go:117] "RemoveContainer" containerID="27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.492887 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691"} err="failed to get container status 
\"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": rpc error: code = NotFound desc = could not find container \"27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691\": container with ID starting with 27bdc05fd337161d590c8031239d9ae913e751b13b14fe8774986327d04bb691 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.492916 4990 scope.go:117] "RemoveContainer" containerID="cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.493159 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e"} err="failed to get container status \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": rpc error: code = NotFound desc = could not find container \"cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e\": container with ID starting with cdaf39969b40548da1ccc8fd8b19c80acb0ee993faa80ff7248ba74e429c693e not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.493217 4990 scope.go:117] "RemoveContainer" containerID="1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.493537 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896"} err="failed to get container status \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": rpc error: code = NotFound desc = could not find container \"1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896\": container with ID starting with 1d162edc82c09fcfd17f42b9df17605447eb376872a039f771e4809652b3a896 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.493583 4990 scope.go:117] "RemoveContainer" containerID="97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.494089 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05"} err="failed to get container status \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": rpc error: code = NotFound desc = could not find container \"97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05\": container with ID starting with 97e10a7969aed3834bf52a4df81244d5f84584416c08a390d1b5b5f51a359b05 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.494152 4990 scope.go:117] "RemoveContainer" containerID="d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.494449 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b"} err="failed to get container status \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": rpc error: code = NotFound desc = could not find container \"d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b\": container with ID starting with d415b532f57419267645069caaaafc418e3270da8934023d08d9cc88b86b393b not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.494525 4990 scope.go:117] "RemoveContainer" 
containerID="758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.494783 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38"} err="failed to get container status \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": rpc error: code = NotFound desc = could not find container \"758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38\": container with ID starting with 758eadafef32600784360e0a193cf4d9be91ef1bb8d6853792a3ac856bf3fa38 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.494835 4990 scope.go:117] "RemoveContainer" containerID="02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.495111 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5"} err="failed to get container status \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": rpc error: code = NotFound desc = could not find container \"02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5\": container with ID starting with 02416f0d917f28792a821744118c085b5bc376152412b29ee70f7764a19fddc5 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.495153 4990 scope.go:117] "RemoveContainer" containerID="f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.495421 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500"} err="failed to get container status \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": rpc error: code = NotFound desc = could not find container \"f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500\": container with ID starting with f423785e6224763b2ae1d2196577cdb67ee1eb871cee87d7f3e268d2a0783500 not found: ID does not exist" Dec 05 01:24:23 crc kubenswrapper[4990]: I1205 01:24:23.944204 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eeec70d-1c5c-434e-90bc-95620458151c" path="/var/lib/kubelet/pods/3eeec70d-1c5c-434e-90bc-95620458151c/volumes" Dec 05 01:24:24 crc kubenswrapper[4990]: I1205 01:24:24.219537 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/2.log" Dec 05 01:24:24 crc kubenswrapper[4990]: I1205 01:24:24.220622 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/1.log" Dec 05 01:24:24 crc kubenswrapper[4990]: I1205 01:24:24.220841 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rdhk7" event={"ID":"c4914133-b0cd-4d12-84d5-c99379e2324a","Type":"ContainerStarted","Data":"7537d87eebdc52a6381d632e9bed8bbb8f60f09531e52bf0a4e80881c9a79209"} Dec 05 01:24:24 crc kubenswrapper[4990]: I1205 01:24:24.223266 4990 generic.go:334] "Generic (PLEG): container finished" podID="9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b" containerID="eb771070e7769ee02fe967eef38100c59eff468646c1d2b55e795794cc038e3c" exitCode=0 Dec 05 01:24:24 crc kubenswrapper[4990]: I1205 01:24:24.223377 4990 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerDied","Data":"eb771070e7769ee02fe967eef38100c59eff468646c1d2b55e795794cc038e3c"} Dec 05 01:24:24 crc kubenswrapper[4990]: I1205 01:24:24.223455 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"14412e7700edfdb4937f1a46675d75605d45f6ec5385851e2fdf4a247596c535"} Dec 05 01:24:25 crc kubenswrapper[4990]: I1205 01:24:25.237740 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"fd08d75d9a87ff54b7210b40a0f00f1798e22105cca0f123ca2bdc1f9da92afc"} Dec 05 01:24:25 crc kubenswrapper[4990]: I1205 01:24:25.238639 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"fb793b6b8755d4f522f7948f9cb265528c237a2060ad2e99f957ea4145a6d167"} Dec 05 01:24:25 crc kubenswrapper[4990]: I1205 01:24:25.238680 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"6d022bed83989d2b74b50d4e49f4c16d38404b9deb3b5e09a5ea10e433513406"} Dec 05 01:24:25 crc kubenswrapper[4990]: I1205 01:24:25.238704 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"a6fce794b6bd4c330ccdbb9da744bc6aeeb9231f5c281c9c09ab9bbafd9122fc"} Dec 05 01:24:25 crc kubenswrapper[4990]: I1205 01:24:25.238725 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"6f70ab5802afb4a846a98c3bf7ee354502ca36406853939dd3277f8b51ed3c39"} Dec 05 01:24:25 crc kubenswrapper[4990]: I1205 01:24:25.238742 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"956dff8770052389cbd67321a8d46602e75b7685c94aac7249eb5a1c31ff77b4"} Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.262040 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"b2acf4517c4fbf3a6dd8d7097cda76fa5e79597743e8c4af38e716247c1de6b6"} Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.465663 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lzddt"] Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.466862 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.486458 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-utilities\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.486568 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-catalog-content\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.486645 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m489m\" (UniqueName: \"kubernetes.io/projected/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-kube-api-access-m489m\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.587993 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-utilities\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.588052 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-catalog-content\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.588287 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m489m\" (UniqueName: \"kubernetes.io/projected/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-kube-api-access-m489m\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.588672 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-catalog-content\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.588935 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-utilities\") pod \"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.616132 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m489m\" (UniqueName: \"kubernetes.io/projected/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-kube-api-access-m489m\") pod 
\"community-operators-lzddt\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") " pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: I1205 01:24:28.787825 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: E1205 01:24:28.838644 4990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(d94ed91236e453910c9af722255f8fff982bb5e3d01de48d634a35fe145b1fe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 01:24:28 crc kubenswrapper[4990]: E1205 01:24:28.838738 4990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(d94ed91236e453910c9af722255f8fff982bb5e3d01de48d634a35fe145b1fe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: E1205 01:24:28.838767 4990 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(d94ed91236e453910c9af722255f8fff982bb5e3d01de48d634a35fe145b1fe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:28 crc kubenswrapper[4990]: E1205 01:24:28.838822 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-lzddt_openshift-marketplace(fcfad2d2-11cc-45a8-95a0-336d2ab92e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-lzddt_openshift-marketplace(fcfad2d2-11cc-45a8-95a0-336d2ab92e47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(d94ed91236e453910c9af722255f8fff982bb5e3d01de48d634a35fe145b1fe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/community-operators-lzddt" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.169067 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzddt"] Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.170235 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.170960 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:31 crc kubenswrapper[4990]: E1205 01:24:31.194471 4990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(f65a5fa56f6673d815b704d0334a9e14f943ac2cc65ba546ffd1ee0a7c8a3e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 01:24:31 crc kubenswrapper[4990]: E1205 01:24:31.195174 4990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(f65a5fa56f6673d815b704d0334a9e14f943ac2cc65ba546ffd1ee0a7c8a3e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:31 crc kubenswrapper[4990]: E1205 01:24:31.195211 4990 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(f65a5fa56f6673d815b704d0334a9e14f943ac2cc65ba546ffd1ee0a7c8a3e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:31 crc kubenswrapper[4990]: E1205 01:24:31.195298 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-lzddt_openshift-marketplace(fcfad2d2-11cc-45a8-95a0-336d2ab92e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-lzddt_openshift-marketplace(fcfad2d2-11cc-45a8-95a0-336d2ab92e47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lzddt_openshift-marketplace_fcfad2d2-11cc-45a8-95a0-336d2ab92e47_0(f65a5fa56f6673d815b704d0334a9e14f943ac2cc65ba546ffd1ee0a7c8a3e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/community-operators-lzddt" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.283513 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" event={"ID":"9ef9ff6e-2032-4cb7-8f4f-3030dc0ea52b","Type":"ContainerStarted","Data":"d67f8e021ded2db3d4b0b1d07a6dbce37f7d7bcafec8e4ad28052da4db84bb95"} Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.283883 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.284148 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.284184 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.309219 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.312746 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" podStartSLOduration=9.312731888 podStartE2EDuration="9.312731888s" podCreationTimestamp="2025-12-05 01:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:24:31.307062697 +0000 UTC m=+969.683278048" watchObservedRunningTime="2025-12-05 01:24:31.312731888 +0000 UTC m=+969.688947249" Dec 05 01:24:31 crc kubenswrapper[4990]: I1205 01:24:31.313530 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.023603 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2fm8f"] Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.024655 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.027274 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.027454 4990 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-fmsg9" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.027808 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.028803 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.034324 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2fm8f"] Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.139948 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgh49\" (UniqueName: \"kubernetes.io/projected/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-kube-api-access-sgh49\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.140304 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-crc-storage\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.140517 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-node-mnt\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.241908 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgh49\" (UniqueName: \"kubernetes.io/projected/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-kube-api-access-sgh49\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.242022 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-crc-storage\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.242101 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-node-mnt\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.242526 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-node-mnt\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " 
pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.243678 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-crc-storage\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.267547 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgh49\" (UniqueName: \"kubernetes.io/projected/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-kube-api-access-sgh49\") pod \"crc-storage-crc-2fm8f\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: I1205 01:24:32.340721 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: E1205 01:24:32.383464 4990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(55a19954d4d4d3b4d50d4839723743b63d512a53c712be46e450c43d469b5e37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 01:24:32 crc kubenswrapper[4990]: E1205 01:24:32.383551 4990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(55a19954d4d4d3b4d50d4839723743b63d512a53c712be46e450c43d469b5e37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: E1205 01:24:32.383581 4990 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(55a19954d4d4d3b4d50d4839723743b63d512a53c712be46e450c43d469b5e37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:32 crc kubenswrapper[4990]: E1205 01:24:32.383634 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2fm8f_crc-storage(9f76b0c0-ac55-4113-b58b-6e283f3b4b13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2fm8f_crc-storage(9f76b0c0-ac55-4113-b58b-6e283f3b4b13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(55a19954d4d4d3b4d50d4839723743b63d512a53c712be46e450c43d469b5e37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2fm8f" podUID="9f76b0c0-ac55-4113-b58b-6e283f3b4b13" Dec 05 01:24:33 crc kubenswrapper[4990]: I1205 01:24:33.299196 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:33 crc kubenswrapper[4990]: I1205 01:24:33.299843 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:33 crc kubenswrapper[4990]: E1205 01:24:33.343182 4990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(d6d50285c4786d9b1ea48cddb58ea3692f14e7f309c51f45ace7c27ec6f017c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 01:24:33 crc kubenswrapper[4990]: E1205 01:24:33.343281 4990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(d6d50285c4786d9b1ea48cddb58ea3692f14e7f309c51f45ace7c27ec6f017c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:33 crc kubenswrapper[4990]: E1205 01:24:33.343325 4990 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(d6d50285c4786d9b1ea48cddb58ea3692f14e7f309c51f45ace7c27ec6f017c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:33 crc kubenswrapper[4990]: E1205 01:24:33.343400 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2fm8f_crc-storage(9f76b0c0-ac55-4113-b58b-6e283f3b4b13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2fm8f_crc-storage(9f76b0c0-ac55-4113-b58b-6e283f3b4b13)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2fm8f_crc-storage_9f76b0c0-ac55-4113-b58b-6e283f3b4b13_0(d6d50285c4786d9b1ea48cddb58ea3692f14e7f309c51f45ace7c27ec6f017c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2fm8f" podUID="9f76b0c0-ac55-4113-b58b-6e283f3b4b13" Dec 05 01:24:44 crc kubenswrapper[4990]: I1205 01:24:44.929890 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:44 crc kubenswrapper[4990]: I1205 01:24:44.931477 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:45 crc kubenswrapper[4990]: I1205 01:24:45.178301 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2fm8f"] Dec 05 01:24:45 crc kubenswrapper[4990]: I1205 01:24:45.380637 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2fm8f" event={"ID":"9f76b0c0-ac55-4113-b58b-6e283f3b4b13","Type":"ContainerStarted","Data":"ac7280f0e09b0172d8df0982334bf2a743988fff50bc18f73aa5f300fe7426ab"} Dec 05 01:24:46 crc kubenswrapper[4990]: I1205 01:24:46.930413 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:46 crc kubenswrapper[4990]: I1205 01:24:46.932070 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzddt" Dec 05 01:24:47 crc kubenswrapper[4990]: I1205 01:24:47.355124 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzddt"] Dec 05 01:24:47 crc kubenswrapper[4990]: W1205 01:24:47.361965 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfad2d2_11cc_45a8_95a0_336d2ab92e47.slice/crio-b4a6a5cbcc55e446577db826c179f0b43688f613f6423b7b4fd218f408262ce2 WatchSource:0}: Error finding container b4a6a5cbcc55e446577db826c179f0b43688f613f6423b7b4fd218f408262ce2: Status 404 returned error can't find the container with id b4a6a5cbcc55e446577db826c179f0b43688f613f6423b7b4fd218f408262ce2 Dec 05 01:24:47 crc kubenswrapper[4990]: I1205 01:24:47.396893 4990 generic.go:334] "Generic (PLEG): container finished" podID="9f76b0c0-ac55-4113-b58b-6e283f3b4b13" containerID="9ddd58214275f6a52cfa41a42374b8c343015285f84aba06efe703146121bba3" exitCode=0 Dec 05 01:24:47 crc kubenswrapper[4990]: I1205 01:24:47.397033 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2fm8f" event={"ID":"9f76b0c0-ac55-4113-b58b-6e283f3b4b13","Type":"ContainerDied","Data":"9ddd58214275f6a52cfa41a42374b8c343015285f84aba06efe703146121bba3"} Dec 05 01:24:47 crc kubenswrapper[4990]: I1205 01:24:47.398853 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerStarted","Data":"b4a6a5cbcc55e446577db826c179f0b43688f613f6423b7b4fd218f408262ce2"} Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.409127 4990 generic.go:334] "Generic (PLEG): container finished" podID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerID="866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d" exitCode=0 Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.409225 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerDied","Data":"866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d"} Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.740892 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.784850 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-crc-storage\") pod \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.785049 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-node-mnt\") pod \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.785109 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgh49\" (UniqueName: \"kubernetes.io/projected/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-kube-api-access-sgh49\") pod \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\" (UID: \"9f76b0c0-ac55-4113-b58b-6e283f3b4b13\") " Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.785261 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "9f76b0c0-ac55-4113-b58b-6e283f3b4b13" (UID: "9f76b0c0-ac55-4113-b58b-6e283f3b4b13"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.785649 4990 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.793868 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-kube-api-access-sgh49" (OuterVolumeSpecName: "kube-api-access-sgh49") pod "9f76b0c0-ac55-4113-b58b-6e283f3b4b13" (UID: "9f76b0c0-ac55-4113-b58b-6e283f3b4b13"). InnerVolumeSpecName "kube-api-access-sgh49". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.809592 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "9f76b0c0-ac55-4113-b58b-6e283f3b4b13" (UID: "9f76b0c0-ac55-4113-b58b-6e283f3b4b13"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.887740 4990 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:48 crc kubenswrapper[4990]: I1205 01:24:48.888400 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgh49\" (UniqueName: \"kubernetes.io/projected/9f76b0c0-ac55-4113-b58b-6e283f3b4b13-kube-api-access-sgh49\") on node \"crc\" DevicePath \"\"" Dec 05 01:24:49 crc kubenswrapper[4990]: I1205 01:24:49.417455 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2fm8f" Dec 05 01:24:49 crc kubenswrapper[4990]: I1205 01:24:49.417396 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2fm8f" event={"ID":"9f76b0c0-ac55-4113-b58b-6e283f3b4b13","Type":"ContainerDied","Data":"ac7280f0e09b0172d8df0982334bf2a743988fff50bc18f73aa5f300fe7426ab"} Dec 05 01:24:49 crc kubenswrapper[4990]: I1205 01:24:49.417535 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7280f0e09b0172d8df0982334bf2a743988fff50bc18f73aa5f300fe7426ab" Dec 05 01:24:49 crc kubenswrapper[4990]: I1205 01:24:49.419743 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerStarted","Data":"74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f"} Dec 05 01:24:50 crc kubenswrapper[4990]: I1205 01:24:50.430274 4990 generic.go:334] "Generic (PLEG): container finished" podID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerID="74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f" exitCode=0 Dec 05 01:24:50 crc kubenswrapper[4990]: I1205 01:24:50.430353 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerDied","Data":"74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f"} Dec 05 01:24:51 crc kubenswrapper[4990]: I1205 01:24:51.440624 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerStarted","Data":"4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd"} Dec 05 01:24:51 crc kubenswrapper[4990]: I1205 01:24:51.474659 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lzddt" podStartSLOduration=20.966673743 podStartE2EDuration="23.474630665s" podCreationTimestamp="2025-12-05 01:24:28 +0000 UTC" firstStartedPulling="2025-12-05 01:24:48.413089619 +0000 UTC m=+986.789305020" lastFinishedPulling="2025-12-05 01:24:50.921046541 +0000 UTC m=+989.297261942" observedRunningTime="2025-12-05 01:24:51.473571565 +0000 UTC m=+989.849786956" watchObservedRunningTime="2025-12-05 01:24:51.474630665 +0000 UTC m=+989.850846056" Dec 05 01:24:51 crc kubenswrapper[4990]: I1205 01:24:51.824390 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:24:51 crc kubenswrapper[4990]: I1205 01:24:51.824520 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:24:53 crc kubenswrapper[4990]: I1205 01:24:53.490178 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5fc5t" Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.272278 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8hb6d"] 
Dec 05 01:24:55 crc kubenswrapper[4990]: E1205 01:24:55.272615 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f76b0c0-ac55-4113-b58b-6e283f3b4b13" containerName="storage"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.272636 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f76b0c0-ac55-4113-b58b-6e283f3b4b13" containerName="storage"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.272838 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f76b0c0-ac55-4113-b58b-6e283f3b4b13" containerName="storage"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.274308 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.289992 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hb6d"]
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.384966 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7h4l\" (UniqueName: \"kubernetes.io/projected/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-kube-api-access-l7h4l\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.385015 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-catalog-content\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.385040 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-utilities\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.485635 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-catalog-content\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.485693 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-utilities\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.485761 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7h4l\" (UniqueName: \"kubernetes.io/projected/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-kube-api-access-l7h4l\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.486318 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-utilities\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.486747 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-catalog-content\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.514199 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7h4l\" (UniqueName: \"kubernetes.io/projected/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-kube-api-access-l7h4l\") pod \"certified-operators-8hb6d\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") " pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.606068 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:24:55 crc kubenswrapper[4990]: I1205 01:24:55.868082 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hb6d"]
Dec 05 01:24:55 crc kubenswrapper[4990]: W1205 01:24:55.884222 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83e0e10a_c7cf_4c11_a6a7_90d69a5f74d8.slice/crio-04ca725711f79a39b1f94cde1ee026f7e28da9fd7702815577be74ee21efc25c WatchSource:0}: Error finding container 04ca725711f79a39b1f94cde1ee026f7e28da9fd7702815577be74ee21efc25c: Status 404 returned error can't find the container with id 04ca725711f79a39b1f94cde1ee026f7e28da9fd7702815577be74ee21efc25c
Dec 05 01:24:56 crc kubenswrapper[4990]: I1205 01:24:56.484619 4990 generic.go:334] "Generic (PLEG): container finished" podID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerID="d7fe366275527297b61897bdf7b1c8c8d4140e8a83e4ec2170d44a495105a1f3" exitCode=0
Dec 05 01:24:56 crc kubenswrapper[4990]: I1205 01:24:56.484718 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hb6d" event={"ID":"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8","Type":"ContainerDied","Data":"d7fe366275527297b61897bdf7b1c8c8d4140e8a83e4ec2170d44a495105a1f3"}
Dec 05 01:24:56 crc kubenswrapper[4990]: I1205 01:24:56.485017 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hb6d" event={"ID":"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8","Type":"ContainerStarted","Data":"04ca725711f79a39b1f94cde1ee026f7e28da9fd7702815577be74ee21efc25c"}
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.494335 4990 generic.go:334] "Generic (PLEG): container finished" podID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerID="f55bb0dcb285a8409d053a367aa419e041ad453b23901f60a1590b6b0eb66aae" exitCode=0
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.494446 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hb6d" event={"ID":"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8","Type":"ContainerDied","Data":"f55bb0dcb285a8409d053a367aa419e041ad453b23901f60a1590b6b0eb66aae"}
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.505261 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"]
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.506997 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.512851 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.521087 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"]
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.610339 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.610600 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.610713 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbckj\" (UniqueName: \"kubernetes.io/projected/3cc34209-d840-4863-b29e-98d64972e9c7-kube-api-access-zbckj\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.712467 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.712635 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.712786 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbckj\" (UniqueName: \"kubernetes.io/projected/3cc34209-d840-4863-b29e-98d64972e9c7-kube-api-access-zbckj\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.713521 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.713956 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.748860 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbckj\" (UniqueName: \"kubernetes.io/projected/3cc34209-d840-4863-b29e-98d64972e9c7-kube-api-access-zbckj\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:57 crc kubenswrapper[4990]: I1205 01:24:57.837939 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.121274 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"]
Dec 05 01:24:58 crc kubenswrapper[4990]: W1205 01:24:58.128573 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cc34209_d840_4863_b29e_98d64972e9c7.slice/crio-21f49b3136540658eb9fa18ac6baf157246426fabdc31510a20694464dddea7e WatchSource:0}: Error finding container 21f49b3136540658eb9fa18ac6baf157246426fabdc31510a20694464dddea7e: Status 404 returned error can't find the container with id 21f49b3136540658eb9fa18ac6baf157246426fabdc31510a20694464dddea7e
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.511467 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hb6d" event={"ID":"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8","Type":"ContainerStarted","Data":"4ef62aa20ed6342098683a94891833498754502acd5e8afc95e69bb71e94ad09"}
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.514799 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z" event={"ID":"3cc34209-d840-4863-b29e-98d64972e9c7","Type":"ContainerStarted","Data":"14b43e0d13e4551c41f8703eb921d7e9f176fda62714db689d86b390b083cc2d"}
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.514853 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z" event={"ID":"3cc34209-d840-4863-b29e-98d64972e9c7","Type":"ContainerStarted","Data":"21f49b3136540658eb9fa18ac6baf157246426fabdc31510a20694464dddea7e"}
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.542199 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8hb6d" podStartSLOduration=2.055482723 podStartE2EDuration="3.542168856s" podCreationTimestamp="2025-12-05 01:24:55 +0000 UTC" firstStartedPulling="2025-12-05 01:24:56.485967348 +0000 UTC m=+994.862182709" lastFinishedPulling="2025-12-05 01:24:57.972653441 +0000 UTC m=+996.348868842" observedRunningTime="2025-12-05 01:24:58.540332414 +0000 UTC m=+996.916547815" watchObservedRunningTime="2025-12-05 01:24:58.542168856 +0000 UTC m=+996.918384257"
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.788288 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lzddt"
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.788332 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lzddt"
Dec 05 01:24:58 crc kubenswrapper[4990]: I1205 01:24:58.843619 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lzddt"
Dec 05 01:24:59 crc kubenswrapper[4990]: I1205 01:24:59.522850 4990 generic.go:334] "Generic (PLEG): container finished" podID="3cc34209-d840-4863-b29e-98d64972e9c7" containerID="14b43e0d13e4551c41f8703eb921d7e9f176fda62714db689d86b390b083cc2d" exitCode=0
Dec 05 01:24:59 crc kubenswrapper[4990]: I1205 01:24:59.522952 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z" event={"ID":"3cc34209-d840-4863-b29e-98d64972e9c7","Type":"ContainerDied","Data":"14b43e0d13e4551c41f8703eb921d7e9f176fda62714db689d86b390b083cc2d"}
Dec 05 01:24:59 crc kubenswrapper[4990]: I1205 01:24:59.586476 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lzddt"
Dec 05 01:25:01 crc kubenswrapper[4990]: I1205 01:25:01.538961 4990 generic.go:334] "Generic (PLEG): container finished" podID="3cc34209-d840-4863-b29e-98d64972e9c7" containerID="5fa66d63ad6971fa035eeefdd72bbcd26750fb0cfd61461c261732d0b9ea0abf" exitCode=0
Dec 05 01:25:01 crc kubenswrapper[4990]: I1205 01:25:01.539014 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z" event={"ID":"3cc34209-d840-4863-b29e-98d64972e9c7","Type":"ContainerDied","Data":"5fa66d63ad6971fa035eeefdd72bbcd26750fb0cfd61461c261732d0b9ea0abf"}
Dec 05 01:25:02 crc kubenswrapper[4990]: I1205 01:25:02.556173 4990 generic.go:334] "Generic (PLEG): container finished" podID="3cc34209-d840-4863-b29e-98d64972e9c7" containerID="ab7cdc93b6aea0bf7fa9051a988e7a4082c3f5a45351140ecc6349edc379bdb5" exitCode=0
Dec 05 01:25:02 crc kubenswrapper[4990]: I1205 01:25:02.556231 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z" event={"ID":"3cc34209-d840-4863-b29e-98d64972e9c7","Type":"ContainerDied","Data":"ab7cdc93b6aea0bf7fa9051a988e7a4082c3f5a45351140ecc6349edc379bdb5"}
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.642936 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzddt"]
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.644266 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lzddt" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="registry-server" containerID="cri-o://4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd" gracePeriod=2
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.872630 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.898819 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-util\") pod \"3cc34209-d840-4863-b29e-98d64972e9c7\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") "
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.911920 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-util" (OuterVolumeSpecName: "util") pod "3cc34209-d840-4863-b29e-98d64972e9c7" (UID: "3cc34209-d840-4863-b29e-98d64972e9c7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.999657 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-bundle\") pod \"3cc34209-d840-4863-b29e-98d64972e9c7\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") "
Dec 05 01:25:03 crc kubenswrapper[4990]: I1205 01:25:03.999746 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbckj\" (UniqueName: \"kubernetes.io/projected/3cc34209-d840-4863-b29e-98d64972e9c7-kube-api-access-zbckj\") pod \"3cc34209-d840-4863-b29e-98d64972e9c7\" (UID: \"3cc34209-d840-4863-b29e-98d64972e9c7\") "
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.000012 4990 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-util\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.000761 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-bundle" (OuterVolumeSpecName: "bundle") pod "3cc34209-d840-4863-b29e-98d64972e9c7" (UID: "3cc34209-d840-4863-b29e-98d64972e9c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.006193 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc34209-d840-4863-b29e-98d64972e9c7-kube-api-access-zbckj" (OuterVolumeSpecName: "kube-api-access-zbckj") pod "3cc34209-d840-4863-b29e-98d64972e9c7" (UID: "3cc34209-d840-4863-b29e-98d64972e9c7"). InnerVolumeSpecName "kube-api-access-zbckj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.101067 4990 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc34209-d840-4863-b29e-98d64972e9c7-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.101102 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbckj\" (UniqueName: \"kubernetes.io/projected/3cc34209-d840-4863-b29e-98d64972e9c7-kube-api-access-zbckj\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.572110 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzddt"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.574474 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.574506 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z" event={"ID":"3cc34209-d840-4863-b29e-98d64972e9c7","Type":"ContainerDied","Data":"21f49b3136540658eb9fa18ac6baf157246426fabdc31510a20694464dddea7e"}
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.574552 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f49b3136540658eb9fa18ac6baf157246426fabdc31510a20694464dddea7e"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.576739 4990 generic.go:334] "Generic (PLEG): container finished" podID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerID="4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd" exitCode=0
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.576780 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerDied","Data":"4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd"}
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.576803 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzddt" event={"ID":"fcfad2d2-11cc-45a8-95a0-336d2ab92e47","Type":"ContainerDied","Data":"b4a6a5cbcc55e446577db826c179f0b43688f613f6423b7b4fd218f408262ce2"}
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.576822 4990 scope.go:117] "RemoveContainer" containerID="4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.576949 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzddt"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.604332 4990 scope.go:117] "RemoveContainer" containerID="74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.626570 4990 scope.go:117] "RemoveContainer" containerID="866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.647740 4990 scope.go:117] "RemoveContainer" containerID="4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd"
Dec 05 01:25:04 crc kubenswrapper[4990]: E1205 01:25:04.649347 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd\": container with ID starting with 4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd not found: ID does not exist" containerID="4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.649395 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd"} err="failed to get container status \"4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd\": rpc error: code = NotFound desc = could not find container \"4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd\": container with ID starting with 4cebe845a57a2acf22bcfb0a17df245a54c9847943baa523a88a6261e7d6c6bd not found: ID does not exist"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.649503 4990 scope.go:117] "RemoveContainer" containerID="74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f"
Dec 05 01:25:04 crc kubenswrapper[4990]: E1205 01:25:04.650000 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f\": container with ID starting with 74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f not found: ID does not exist" containerID="74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.650032 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f"} err="failed to get container status \"74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f\": rpc error: code = NotFound desc = could not find container \"74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f\": container with ID starting with 74d4a79c03129d02053f34ee1616109486d92d9d05d590bae0f5123536347f4f not found: ID does not exist"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.650054 4990 scope.go:117] "RemoveContainer" containerID="866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d"
Dec 05 01:25:04 crc kubenswrapper[4990]: E1205 01:25:04.650350 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d\": container with ID starting with 866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d not found: ID does not exist" containerID="866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.650389 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d"} err="failed to get container status \"866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d\": rpc error: code = NotFound desc = could not find container \"866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d\": container with ID starting with 866108320a2ed0a26f2d392d124e4853c13224dc4ec37469b79d0ac650ed529d not found: ID does not exist"
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.708279 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m489m\" (UniqueName: \"kubernetes.io/projected/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-kube-api-access-m489m\") pod \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") "
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.708395 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-catalog-content\") pod \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") "
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.708579 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-utilities\") pod \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\" (UID: \"fcfad2d2-11cc-45a8-95a0-336d2ab92e47\") "
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.709646 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-utilities" (OuterVolumeSpecName: "utilities") pod "fcfad2d2-11cc-45a8-95a0-336d2ab92e47" (UID: "fcfad2d2-11cc-45a8-95a0-336d2ab92e47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.713712 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-kube-api-access-m489m" (OuterVolumeSpecName: "kube-api-access-m489m") pod "fcfad2d2-11cc-45a8-95a0-336d2ab92e47" (UID: "fcfad2d2-11cc-45a8-95a0-336d2ab92e47"). InnerVolumeSpecName "kube-api-access-m489m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.789415 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcfad2d2-11cc-45a8-95a0-336d2ab92e47" (UID: "fcfad2d2-11cc-45a8-95a0-336d2ab92e47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.809992 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m489m\" (UniqueName: \"kubernetes.io/projected/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-kube-api-access-m489m\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.810033 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.810048 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcfad2d2-11cc-45a8-95a0-336d2ab92e47-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.927180 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzddt"]
Dec 05 01:25:04 crc kubenswrapper[4990]: I1205 01:25:04.932824 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lzddt"]
Dec 05 01:25:05 crc kubenswrapper[4990]: I1205 01:25:05.606371 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:25:05 crc kubenswrapper[4990]: I1205 01:25:05.606434 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:25:05 crc kubenswrapper[4990]: I1205 01:25:05.675984 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:25:05 crc kubenswrapper[4990]: I1205 01:25:05.942131 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" path="/var/lib/kubelet/pods/fcfad2d2-11cc-45a8-95a0-336d2ab92e47/volumes"
Dec 05 01:25:06 crc kubenswrapper[4990]: I1205 01:25:06.652281 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.070673 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"]
Dec 05 01:25:08 crc kubenswrapper[4990]: E1205 01:25:08.071169 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="extract-content"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071185 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="extract-content"
Dec 05 01:25:08 crc kubenswrapper[4990]: E1205 01:25:08.071205 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="util"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071213 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="util"
Dec 05 01:25:08 crc kubenswrapper[4990]: E1205 01:25:08.071227 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="extract-utilities"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071235 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="extract-utilities"
Dec 05 01:25:08 crc kubenswrapper[4990]: E1205 01:25:08.071251 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="extract"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071258 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="extract"
Dec 05 01:25:08 crc kubenswrapper[4990]: E1205 01:25:08.071273 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="registry-server"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071281 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="registry-server"
Dec 05 01:25:08 crc kubenswrapper[4990]: E1205 01:25:08.071290 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="pull"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071299 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="pull"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071413 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcfad2d2-11cc-45a8-95a0-336d2ab92e47" containerName="registry-server"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071430 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc34209-d840-4863-b29e-98d64972e9c7" containerName="extract"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.071857 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.074565 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.075342 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.078749 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-27j56"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.080098 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"]
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.257956 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxfc\" (UniqueName: \"kubernetes.io/projected/0e74c706-ea18-4a4f-8056-bba53a53edf9-kube-api-access-ljxfc\") pod \"nmstate-operator-5b5b58f5c8-jbx5x\" (UID: \"0e74c706-ea18-4a4f-8056-bba53a53edf9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.359561 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljxfc\" (UniqueName: \"kubernetes.io/projected/0e74c706-ea18-4a4f-8056-bba53a53edf9-kube-api-access-ljxfc\") pod \"nmstate-operator-5b5b58f5c8-jbx5x\" (UID: \"0e74c706-ea18-4a4f-8056-bba53a53edf9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.393741 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljxfc\" (UniqueName: \"kubernetes.io/projected/0e74c706-ea18-4a4f-8056-bba53a53edf9-kube-api-access-ljxfc\") pod \"nmstate-operator-5b5b58f5c8-jbx5x\" (UID: \"0e74c706-ea18-4a4f-8056-bba53a53edf9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.689125 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"
Dec 05 01:25:08 crc kubenswrapper[4990]: I1205 01:25:08.960797 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x"]
Dec 05 01:25:08 crc kubenswrapper[4990]: W1205 01:25:08.967630 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e74c706_ea18_4a4f_8056_bba53a53edf9.slice/crio-00d923989a5bdbc9ebf65e260031f328b71090f1b461ede1d468940e6673461b WatchSource:0}: Error finding container 00d923989a5bdbc9ebf65e260031f328b71090f1b461ede1d468940e6673461b: Status 404 returned error can't find the container with id 00d923989a5bdbc9ebf65e260031f328b71090f1b461ede1d468940e6673461b
Dec 05 01:25:09 crc kubenswrapper[4990]: I1205 01:25:09.615082 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x" event={"ID":"0e74c706-ea18-4a4f-8056-bba53a53edf9","Type":"ContainerStarted","Data":"00d923989a5bdbc9ebf65e260031f328b71090f1b461ede1d468940e6673461b"}
Dec 05 01:25:10 crc kubenswrapper[4990]: I1205 01:25:10.041252 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hb6d"]
Dec 05 01:25:10 crc kubenswrapper[4990]: I1205 01:25:10.041639 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8hb6d" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="registry-server" containerID="cri-o://4ef62aa20ed6342098683a94891833498754502acd5e8afc95e69bb71e94ad09" gracePeriod=2
Dec 05 01:25:10 crc kubenswrapper[4990]: I1205 01:25:10.625241 4990 generic.go:334] "Generic (PLEG): container finished" podID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerID="4ef62aa20ed6342098683a94891833498754502acd5e8afc95e69bb71e94ad09" exitCode=0
Dec 05 01:25:10 crc kubenswrapper[4990]: I1205 01:25:10.625375 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hb6d" event={"ID":"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8","Type":"ContainerDied","Data":"4ef62aa20ed6342098683a94891833498754502acd5e8afc95e69bb71e94ad09"}
Dec 05 01:25:10 crc kubenswrapper[4990]: I1205 01:25:10.929461 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.095137 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-catalog-content\") pod \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") "
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.095244 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-utilities\") pod \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") "
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.095428 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7h4l\" (UniqueName: \"kubernetes.io/projected/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-kube-api-access-l7h4l\") pod \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\" (UID: \"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8\") "
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.096002 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-utilities" (OuterVolumeSpecName: "utilities") pod "83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" (UID: "83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.104194 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-kube-api-access-l7h4l" (OuterVolumeSpecName: "kube-api-access-l7h4l") pod "83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" (UID: "83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8"). InnerVolumeSpecName "kube-api-access-l7h4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.153752 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" (UID: "83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.196841 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.196886 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.196905 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7h4l\" (UniqueName: \"kubernetes.io/projected/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8-kube-api-access-l7h4l\") on node \"crc\" DevicePath \"\""
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.636085 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hb6d" event={"ID":"83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8","Type":"ContainerDied","Data":"04ca725711f79a39b1f94cde1ee026f7e28da9fd7702815577be74ee21efc25c"}
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.636158 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hb6d"
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.636166 4990 scope.go:117] "RemoveContainer" containerID="4ef62aa20ed6342098683a94891833498754502acd5e8afc95e69bb71e94ad09"
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.657253 4990 scope.go:117] "RemoveContainer" containerID="f55bb0dcb285a8409d053a367aa419e041ad453b23901f60a1590b6b0eb66aae"
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.677330 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hb6d"]
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.680360 4990 scope.go:117] "RemoveContainer" containerID="d7fe366275527297b61897bdf7b1c8c8d4140e8a83e4ec2170d44a495105a1f3"
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.683888 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8hb6d"]
Dec 05 01:25:11 crc kubenswrapper[4990]: I1205 01:25:11.939467 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" path="/var/lib/kubelet/pods/83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8/volumes"
Dec 05 01:25:17 crc kubenswrapper[4990]: I1205 01:25:17.684883 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x" event={"ID":"0e74c706-ea18-4a4f-8056-bba53a53edf9","Type":"ContainerStarted","Data":"8b08cb0ea1fcc52d1ad249021cfd7283cb092432746ed5495d848e93562966a3"}
Dec 05 01:25:17 crc kubenswrapper[4990]: I1205 01:25:17.717946 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-jbx5x" podStartSLOduration=1.32939196 podStartE2EDuration="9.717915733s" podCreationTimestamp="2025-12-05 01:25:08 +0000 UTC" firstStartedPulling="2025-12-05 01:25:08.97189488 +0000 UTC m=+1007.348110241" lastFinishedPulling="2025-12-05 01:25:17.360418643 +0000 UTC m=+1015.736634014" observedRunningTime="2025-12-05 01:25:17.710471282 +0000 UTC m=+1016.086686673" watchObservedRunningTime="2025-12-05 01:25:17.717915733 +0000 UTC m=+1016.094131124"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.759394 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"]
Dec 05 01:25:18 crc kubenswrapper[4990]: E1205 01:25:18.759641 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="extract-utilities"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.759656 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="extract-utilities"
Dec 05 01:25:18 crc kubenswrapper[4990]: E1205 01:25:18.759679 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="extract-content"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.759687 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="extract-content"
Dec 05 01:25:18 crc kubenswrapper[4990]: E1205 01:25:18.759706 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="registry-server"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.759714 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="registry-server"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.759849 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e0e10a-c7cf-4c11-a6a7-90d69a5f74d8" containerName="registry-server"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.760509 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.763083 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-29wxx"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.778203 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"]
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.786883 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.792761 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.822361 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqtt\" (UniqueName: \"kubernetes.io/projected/a3077203-24e9-4351-8ba8-5bcaa5942894-kube-api-access-zpqtt\") pod \"nmstate-webhook-5f6d4c5ccb-w7vq9\" (UID: \"a3077203-24e9-4351-8ba8-5bcaa5942894\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.822446 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3077203-24e9-4351-8ba8-5bcaa5942894-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-w7vq9\" (UID: \"a3077203-24e9-4351-8ba8-5bcaa5942894\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.822470 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm99w\" (UniqueName: \"kubernetes.io/projected/5ee24fe7-1614-4d3b-8501-c7c1cdf4449f-kube-api-access-sm99w\") pod \"nmstate-metrics-7f946cbc9-wckwv\" (UID: \"5ee24fe7-1614-4d3b-8501-c7c1cdf4449f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.824811 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"]
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.835740 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mgpb7"]
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.836801 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.854631 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"]
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.912870 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"]
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.913629 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.915325 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-thwdx"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.915389 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.919055 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.922663 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"]
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.923271 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-ovs-socket\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.923367 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-dbus-socket\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.923503 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9htqf\" (UniqueName: \"kubernetes.io/projected/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-kube-api-access-9htqf\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.923620 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-nmstate-lock\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.923840 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3077203-24e9-4351-8ba8-5bcaa5942894-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-w7vq9\" (UID: \"a3077203-24e9-4351-8ba8-5bcaa5942894\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.923929 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm99w\" (UniqueName: \"kubernetes.io/projected/5ee24fe7-1614-4d3b-8501-c7c1cdf4449f-kube-api-access-sm99w\") pod \"nmstate-metrics-7f946cbc9-wckwv\" (UID: \"5ee24fe7-1614-4d3b-8501-c7c1cdf4449f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.924028 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqtt\" (UniqueName: \"kubernetes.io/projected/a3077203-24e9-4351-8ba8-5bcaa5942894-kube-api-access-zpqtt\") pod \"nmstate-webhook-5f6d4c5ccb-w7vq9\" (UID: \"a3077203-24e9-4351-8ba8-5bcaa5942894\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.931005 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a3077203-24e9-4351-8ba8-5bcaa5942894-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-w7vq9\" (UID: \"a3077203-24e9-4351-8ba8-5bcaa5942894\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.941676 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm99w\" (UniqueName: \"kubernetes.io/projected/5ee24fe7-1614-4d3b-8501-c7c1cdf4449f-kube-api-access-sm99w\") pod \"nmstate-metrics-7f946cbc9-wckwv\" (UID: \"5ee24fe7-1614-4d3b-8501-c7c1cdf4449f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"
Dec 05 01:25:18 crc kubenswrapper[4990]: I1205 01:25:18.944072 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqtt\" (UniqueName: \"kubernetes.io/projected/a3077203-24e9-4351-8ba8-5bcaa5942894-kube-api-access-zpqtt\") pod \"nmstate-webhook-5f6d4c5ccb-w7vq9\" (UID: \"a3077203-24e9-4351-8ba8-5bcaa5942894\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.025639 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.025916 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9htqf\" (UniqueName: \"kubernetes.io/projected/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-kube-api-access-9htqf\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026002 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-nmstate-lock\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026074 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026162 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwrp\" (UniqueName: \"kubernetes.io/projected/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-kube-api-access-ltwrp\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026255 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-ovs-socket\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026334 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-dbus-socket\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026407 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-nmstate-lock\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026513 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-ovs-socket\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.026941 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-dbus-socket\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.046166 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9htqf\" (UniqueName: \"kubernetes.io/projected/4b233ec7-934e-4cef-a18b-8b8c9f36e23e-kube-api-access-9htqf\") pod \"nmstate-handler-mgpb7\" (UID: \"4b233ec7-934e-4cef-a18b-8b8c9f36e23e\") " pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.092327 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dfc497f9-z4q58"]
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.092594 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.093057 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.110132 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dfc497f9-z4q58"]
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127192 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f087370c-7e9a-4155-a3da-845a92313ad3-console-oauth-config\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127242 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-trusted-ca-bundle\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127278 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127308 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f087370c-7e9a-4155-a3da-845a92313ad3-console-serving-cert\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127343 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127364 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-service-ca\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127404 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltwrp\" (UniqueName: \"kubernetes.io/projected/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-kube-api-access-ltwrp\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127446 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc49q\" (UniqueName: \"kubernetes.io/projected/f087370c-7e9a-4155-a3da-845a92313ad3-kube-api-access-mc49q\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127497 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-console-config\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.127527 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-oauth-serving-cert\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.128517 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: E1205 01:25:19.128613 4990 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Dec 05 01:25:19 crc kubenswrapper[4990]: E1205 01:25:19.128663 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-plugin-serving-cert podName:2ddd952b-ae34-497f-b7d0-e428cb8eb66a nodeName:}" failed. No retries permitted until 2025-12-05 01:25:19.628644901 +0000 UTC m=+1018.004860282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-f48rv" (UID: "2ddd952b-ae34-497f-b7d0-e428cb8eb66a") : secret "plugin-serving-cert" not found
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.142163 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.153534 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mgpb7"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.162337 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltwrp\" (UniqueName: \"kubernetes.io/projected/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-kube-api-access-ltwrp\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"
Dec 05 01:25:19 crc kubenswrapper[4990]: W1205 01:25:19.174146 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b233ec7_934e_4cef_a18b_8b8c9f36e23e.slice/crio-9d5084bae58c1f753118d90c95a62304eec9bf1dc94ff97e920763bf08ed30ec WatchSource:0}: Error finding container 9d5084bae58c1f753118d90c95a62304eec9bf1dc94ff97e920763bf08ed30ec: Status 404 returned error can't find the container with id 9d5084bae58c1f753118d90c95a62304eec9bf1dc94ff97e920763bf08ed30ec
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232049 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc49q\" (UniqueName: \"kubernetes.io/projected/f087370c-7e9a-4155-a3da-845a92313ad3-kube-api-access-mc49q\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232109 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-console-config\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232132 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-oauth-serving-cert\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232151 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f087370c-7e9a-4155-a3da-845a92313ad3-console-oauth-config\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232165 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-trusted-ca-bundle\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232192 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f087370c-7e9a-4155-a3da-845a92313ad3-console-serving-cert\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58"
Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.232216 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-service-ca\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.233813 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-console-config\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.234294 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-oauth-serving-cert\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.237530 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f087370c-7e9a-4155-a3da-845a92313ad3-console-oauth-config\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.238069 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-service-ca\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.238755 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f087370c-7e9a-4155-a3da-845a92313ad3-trusted-ca-bundle\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.242555 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f087370c-7e9a-4155-a3da-845a92313ad3-console-serving-cert\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.255930 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc49q\" (UniqueName: \"kubernetes.io/projected/f087370c-7e9a-4155-a3da-845a92313ad3-kube-api-access-mc49q\") pod \"console-5dfc497f9-z4q58\" (UID: \"f087370c-7e9a-4155-a3da-845a92313ad3\") " pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.313233 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv"] Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.345846 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9"] Dec 05 01:25:19 crc kubenswrapper[4990]: W1205 01:25:19.349330 4990 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3077203_24e9_4351_8ba8_5bcaa5942894.slice/crio-e7014c7d897d2a6bea8d3005e1cd1974f6be363ab5095e00895d367ab4ee3ee9 WatchSource:0}: Error finding container e7014c7d897d2a6bea8d3005e1cd1974f6be363ab5095e00895d367ab4ee3ee9: Status 404 returned error can't find the container with id e7014c7d897d2a6bea8d3005e1cd1974f6be363ab5095e00895d367ab4ee3ee9 Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.415581 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.637245 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.642498 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddd952b-ae34-497f-b7d0-e428cb8eb66a-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-f48rv\" (UID: \"2ddd952b-ae34-497f-b7d0-e428cb8eb66a\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.702266 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv" event={"ID":"5ee24fe7-1614-4d3b-8501-c7c1cdf4449f","Type":"ContainerStarted","Data":"8bb9ea876e8ff666c207e18b2d3f303edbf7f373f25e679ebf6dbe80bbb23bd5"} Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.704459 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mgpb7" event={"ID":"4b233ec7-934e-4cef-a18b-8b8c9f36e23e","Type":"ContainerStarted","Data":"9d5084bae58c1f753118d90c95a62304eec9bf1dc94ff97e920763bf08ed30ec"} Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.706141 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9" event={"ID":"a3077203-24e9-4351-8ba8-5bcaa5942894","Type":"ContainerStarted","Data":"e7014c7d897d2a6bea8d3005e1cd1974f6be363ab5095e00895d367ab4ee3ee9"} Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.829016 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv" Dec 05 01:25:19 crc kubenswrapper[4990]: I1205 01:25:19.850993 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dfc497f9-z4q58"] Dec 05 01:25:19 crc kubenswrapper[4990]: W1205 01:25:19.856588 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf087370c_7e9a_4155_a3da_845a92313ad3.slice/crio-7524ec0eb4041fe2ea9a9e9c8a8845e84c828e81bf0a6413454861d30aad1fce WatchSource:0}: Error finding container 7524ec0eb4041fe2ea9a9e9c8a8845e84c828e81bf0a6413454861d30aad1fce: Status 404 returned error can't find the container with id 7524ec0eb4041fe2ea9a9e9c8a8845e84c828e81bf0a6413454861d30aad1fce Dec 05 01:25:20 crc kubenswrapper[4990]: I1205 01:25:20.094209 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv"] Dec 05 01:25:20 crc kubenswrapper[4990]: W1205 01:25:20.099209 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ddd952b_ae34_497f_b7d0_e428cb8eb66a.slice/crio-e551c51156f63d78f37c592c844c952c9b5a6be66179e8de65d3f0e070814204 WatchSource:0}: Error finding container e551c51156f63d78f37c592c844c952c9b5a6be66179e8de65d3f0e070814204: Status 404 returned error can't find the container with id e551c51156f63d78f37c592c844c952c9b5a6be66179e8de65d3f0e070814204 Dec 05 01:25:20 crc kubenswrapper[4990]: I1205 01:25:20.713658 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dfc497f9-z4q58" event={"ID":"f087370c-7e9a-4155-a3da-845a92313ad3","Type":"ContainerStarted","Data":"f07e401bc486a8839597816cd8a2d0ad42b66fe990695ceae23be5894fa281e1"} Dec 05 01:25:20 crc kubenswrapper[4990]: I1205 01:25:20.713706 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dfc497f9-z4q58" event={"ID":"f087370c-7e9a-4155-a3da-845a92313ad3","Type":"ContainerStarted","Data":"7524ec0eb4041fe2ea9a9e9c8a8845e84c828e81bf0a6413454861d30aad1fce"} Dec 05 01:25:20 crc kubenswrapper[4990]: I1205 01:25:20.715548 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv" event={"ID":"2ddd952b-ae34-497f-b7d0-e428cb8eb66a","Type":"ContainerStarted","Data":"e551c51156f63d78f37c592c844c952c9b5a6be66179e8de65d3f0e070814204"} Dec 05 01:25:20 crc kubenswrapper[4990]: I1205 01:25:20.737458 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dfc497f9-z4q58" podStartSLOduration=1.737431837 podStartE2EDuration="1.737431837s" podCreationTimestamp="2025-12-05 01:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:25:20.734065832 +0000 UTC m=+1019.110281283" watchObservedRunningTime="2025-12-05 01:25:20.737431837 +0000 UTC m=+1019.113647208" Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.724646 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9" event={"ID":"a3077203-24e9-4351-8ba8-5bcaa5942894","Type":"ContainerStarted","Data":"790539306b81c19fac291635d5ce71e3b9c99162bd00d212702b8a2970f9c375"} Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.725605 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9" Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.728199 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv" event={"ID":"5ee24fe7-1614-4d3b-8501-c7c1cdf4449f","Type":"ContainerStarted","Data":"c36eb8a651b54687bcf94a2b6c0b8800fd5251db96f4f729bc543899940ba379"} Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.730111 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mgpb7" event={"ID":"4b233ec7-934e-4cef-a18b-8b8c9f36e23e","Type":"ContainerStarted","Data":"31a22854b98ab35901cf29782de64748cc1b6f41a3ea53559d446dc7ec83cd7b"} Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.736164 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mgpb7" Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.745614 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9" podStartSLOduration=1.864423193 podStartE2EDuration="3.745597515s" podCreationTimestamp="2025-12-05 01:25:18 +0000 UTC" firstStartedPulling="2025-12-05 01:25:19.351803141 +0000 UTC m=+1017.728018502" lastFinishedPulling="2025-12-05 01:25:21.232977463 +0000 UTC m=+1019.609192824" observedRunningTime="2025-12-05 01:25:21.743783503 +0000 UTC m=+1020.119998874" watchObservedRunningTime="2025-12-05 01:25:21.745597515 +0000 UTC m=+1020.121812886" Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.768944 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mgpb7" podStartSLOduration=1.663957496 podStartE2EDuration="3.768917386s" podCreationTimestamp="2025-12-05 01:25:18 +0000 UTC" firstStartedPulling="2025-12-05 01:25:19.176424457 +0000 UTC m=+1017.552639808" lastFinishedPulling="2025-12-05 01:25:21.281384337 +0000 UTC m=+1019.657599698" observedRunningTime="2025-12-05 01:25:21.764869601 +0000 UTC m=+1020.141084972" watchObservedRunningTime="2025-12-05 01:25:21.768917386 +0000 UTC m=+1020.145132747" Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.823404 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:25:21 crc kubenswrapper[4990]: I1205 01:25:21.823515 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:25:22 crc kubenswrapper[4990]: I1205 01:25:22.400957 4990 scope.go:117] "RemoveContainer" containerID="2cb934aa0cb867865c3cc63541e39eaa488349656fdbb8df851d66001a971602" Dec 05 01:25:22 crc kubenswrapper[4990]: I1205 01:25:22.741988 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rdhk7_c4914133-b0cd-4d12-84d5-c99379e2324a/kube-multus/2.log" Dec 05 01:25:22 crc kubenswrapper[4990]: I1205 01:25:22.744462 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv" 
event={"ID":"2ddd952b-ae34-497f-b7d0-e428cb8eb66a","Type":"ContainerStarted","Data":"46cf3d0e82039497a5b49c876431df216203c10ccda429c4049e115e8052eae5"} Dec 05 01:25:22 crc kubenswrapper[4990]: I1205 01:25:22.765285 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-f48rv" podStartSLOduration=2.712062177 podStartE2EDuration="4.765261169s" podCreationTimestamp="2025-12-05 01:25:18 +0000 UTC" firstStartedPulling="2025-12-05 01:25:20.102885387 +0000 UTC m=+1018.479100758" lastFinishedPulling="2025-12-05 01:25:22.156084379 +0000 UTC m=+1020.532299750" observedRunningTime="2025-12-05 01:25:22.760905716 +0000 UTC m=+1021.137121097" watchObservedRunningTime="2025-12-05 01:25:22.765261169 +0000 UTC m=+1021.141476530" Dec 05 01:25:23 crc kubenswrapper[4990]: I1205 01:25:23.756995 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv" event={"ID":"5ee24fe7-1614-4d3b-8501-c7c1cdf4449f","Type":"ContainerStarted","Data":"b4fce21f98ae0b48c9e72bd3a795a5687bd7ff2d496980cc062cff4079ced52c"} Dec 05 01:25:23 crc kubenswrapper[4990]: I1205 01:25:23.788341 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wckwv" podStartSLOduration=1.92780064 podStartE2EDuration="5.78830792s" podCreationTimestamp="2025-12-05 01:25:18 +0000 UTC" firstStartedPulling="2025-12-05 01:25:19.319189066 +0000 UTC m=+1017.695404417" lastFinishedPulling="2025-12-05 01:25:23.179696336 +0000 UTC m=+1021.555911697" observedRunningTime="2025-12-05 01:25:23.782815064 +0000 UTC m=+1022.159030505" watchObservedRunningTime="2025-12-05 01:25:23.78830792 +0000 UTC m=+1022.164523351" Dec 05 01:25:29 crc kubenswrapper[4990]: I1205 01:25:29.186052 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mgpb7" Dec 05 01:25:29 crc kubenswrapper[4990]: I1205 01:25:29.416219 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:29 crc kubenswrapper[4990]: I1205 01:25:29.416727 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:29 crc kubenswrapper[4990]: I1205 01:25:29.423418 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:29 crc kubenswrapper[4990]: I1205 01:25:29.802754 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dfc497f9-z4q58" Dec 05 01:25:29 crc kubenswrapper[4990]: I1205 01:25:29.881704 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g6z24"] Dec 05 01:25:39 crc kubenswrapper[4990]: I1205 01:25:39.153088 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-w7vq9" Dec 05 01:25:51 crc kubenswrapper[4990]: I1205 01:25:51.824740 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:25:51 crc kubenswrapper[4990]: I1205 01:25:51.825466 4990 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:25:51 crc kubenswrapper[4990]: I1205 01:25:51.825545 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:25:51 crc kubenswrapper[4990]: I1205 01:25:51.826042 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f2b4a96536639cbb9d3bc8aec6f26003832337aeb02bfe5ac6cc1d82eae2a27"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:25:51 crc kubenswrapper[4990]: I1205 01:25:51.826107 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://6f2b4a96536639cbb9d3bc8aec6f26003832337aeb02bfe5ac6cc1d82eae2a27" gracePeriod=600 Dec 05 01:25:52 crc kubenswrapper[4990]: I1205 01:25:52.971922 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="6f2b4a96536639cbb9d3bc8aec6f26003832337aeb02bfe5ac6cc1d82eae2a27" exitCode=0 Dec 05 01:25:52 crc kubenswrapper[4990]: I1205 01:25:52.971977 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"6f2b4a96536639cbb9d3bc8aec6f26003832337aeb02bfe5ac6cc1d82eae2a27"} Dec 05 01:25:52 crc kubenswrapper[4990]: I1205 01:25:52.972580 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"ff6ba92961791b172f695a12e8eb19f33bc6e8ba78d861452310d9615b6fa761"} Dec 05 01:25:52 crc kubenswrapper[4990]: I1205 01:25:52.972612 4990 scope.go:117] "RemoveContainer" containerID="e80af07b88563f0c4908362eee70be4cc7b74f59335c734b90ad5639312c2fd8" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.204703 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9"] Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.206069 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.211477 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.216384 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9"] Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.347731 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.348014 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvsz\" (UniqueName: \"kubernetes.io/projected/f17d56f5-716f-4187-b328-abee78a41a82-kube-api-access-dkvsz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.348079 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.449047 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.449102 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvsz\" (UniqueName: \"kubernetes.io/projected/f17d56f5-716f-4187-b328-abee78a41a82-kube-api-access-dkvsz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.449187 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.449526 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.449629 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.469633 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvsz\" (UniqueName: \"kubernetes.io/projected/f17d56f5-716f-4187-b328-abee78a41a82-kube-api-access-dkvsz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.521112 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.720231 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9"] Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.946219 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g6z24" podUID="b8bb3b38-72ab-4295-8b62-99f5f424c711" containerName="console" containerID="cri-o://8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b" gracePeriod=15 Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.989174 4990 generic.go:334] "Generic (PLEG): container finished" podID="f17d56f5-716f-4187-b328-abee78a41a82" containerID="429506947cdc69c9b3997045cf44f11130976ad93532a5847e5a04ee10e68ff9" exitCode=0 Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.989258 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" event={"ID":"f17d56f5-716f-4187-b328-abee78a41a82","Type":"ContainerDied","Data":"429506947cdc69c9b3997045cf44f11130976ad93532a5847e5a04ee10e68ff9"} Dec 05 01:25:54 crc kubenswrapper[4990]: I1205 01:25:54.989317 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" event={"ID":"f17d56f5-716f-4187-b328-abee78a41a82","Type":"ContainerStarted","Data":"2f968e2dd0c55706a52d6abf096e495a7766194da51323058abd7b8e9f826191"} Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.349084 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g6z24_b8bb3b38-72ab-4295-8b62-99f5f424c711/console/0.log" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.349178 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.460985 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-oauth-config\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.461050 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-config\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.461091 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-service-ca\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.461229 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-trusted-ca-bundle\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.461305 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcl84\" (UniqueName: \"kubernetes.io/projected/b8bb3b38-72ab-4295-8b62-99f5f424c711-kube-api-access-zcl84\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.461338 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-serving-cert\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.461406 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-oauth-serving-cert\") pod \"b8bb3b38-72ab-4295-8b62-99f5f424c711\" (UID: \"b8bb3b38-72ab-4295-8b62-99f5f424c711\") " Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.462067 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-config" (OuterVolumeSpecName: "console-config") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.462348 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.462745 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-service-ca" (OuterVolumeSpecName: "service-ca") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.462859 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.470237 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.470334 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bb3b38-72ab-4295-8b62-99f5f424c711-kube-api-access-zcl84" (OuterVolumeSpecName: "kube-api-access-zcl84") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "kube-api-access-zcl84". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.470653 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b8bb3b38-72ab-4295-8b62-99f5f424c711" (UID: "b8bb3b38-72ab-4295-8b62-99f5f424c711"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563411 4990 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563464 4990 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563515 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcl84\" (UniqueName: \"kubernetes.io/projected/b8bb3b38-72ab-4295-8b62-99f5f424c711-kube-api-access-zcl84\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563537 4990 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563554 4990 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563570 4990 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.563587 4990 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8bb3b38-72ab-4295-8b62-99f5f424c711-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.998570 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g6z24_b8bb3b38-72ab-4295-8b62-99f5f424c711/console/0.log" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.998776 4990 generic.go:334] "Generic (PLEG): container finished" podID="b8bb3b38-72ab-4295-8b62-99f5f424c711" containerID="8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b" exitCode=2 Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.998818 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g6z24" event={"ID":"b8bb3b38-72ab-4295-8b62-99f5f424c711","Type":"ContainerDied","Data":"8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b"} Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.998852 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g6z24" event={"ID":"b8bb3b38-72ab-4295-8b62-99f5f424c711","Type":"ContainerDied","Data":"30f43f002aa1a146c6ddcad51c6181741e5c645ab00d5fb1e2311abd6bc354e6"} Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.998877 4990 scope.go:117] "RemoveContainer" containerID="8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b" Dec 05 01:25:55 crc kubenswrapper[4990]: I1205 01:25:55.998923 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g6z24" Dec 05 01:25:56 crc kubenswrapper[4990]: I1205 01:25:56.028452 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g6z24"] Dec 05 01:25:56 crc kubenswrapper[4990]: I1205 01:25:56.035088 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g6z24"] Dec 05 01:25:56 crc kubenswrapper[4990]: I1205 01:25:56.182476 4990 scope.go:117] "RemoveContainer" containerID="8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b" Dec 05 01:25:56 crc kubenswrapper[4990]: E1205 01:25:56.183073 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b\": container with ID starting with 8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b not found: ID does not exist" containerID="8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b" Dec 05 01:25:56 crc kubenswrapper[4990]: I1205 01:25:56.183104 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b"} err="failed to get container status \"8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b\": rpc error: code = NotFound desc = could not find container \"8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b\": container with ID starting with 8a1395a2aa602fb3d1fd4560f54467adef50120d99c9eb431c62a43e4162667b not found: ID does not exist" Dec 05 01:25:57 crc kubenswrapper[4990]: I1205 01:25:57.010833 4990 generic.go:334] "Generic (PLEG): container finished" podID="f17d56f5-716f-4187-b328-abee78a41a82" containerID="f376547b13d7590e85f502375c5ce067bd3f77f18447f2396faf0ad7cd4ae30b" exitCode=0 Dec 05 01:25:57 crc kubenswrapper[4990]: I1205 01:25:57.011078 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" event={"ID":"f17d56f5-716f-4187-b328-abee78a41a82","Type":"ContainerDied","Data":"f376547b13d7590e85f502375c5ce067bd3f77f18447f2396faf0ad7cd4ae30b"} Dec 05 01:25:57 crc kubenswrapper[4990]: E1205 01:25:57.353718 4990 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf17d56f5_716f_4187_b328_abee78a41a82.slice/crio-8c07114a8319bef03852737a965321399a75d58f4e30acfbef8749eaa5a6e8e2.scope\": RecentStats: unable to find data in memory cache]" Dec 05 01:25:57 crc kubenswrapper[4990]: I1205 01:25:57.942575 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bb3b38-72ab-4295-8b62-99f5f424c711" path="/var/lib/kubelet/pods/b8bb3b38-72ab-4295-8b62-99f5f424c711/volumes" Dec 05 01:25:58 crc kubenswrapper[4990]: I1205 01:25:58.020625 4990 generic.go:334] "Generic (PLEG): container finished" podID="f17d56f5-716f-4187-b328-abee78a41a82" containerID="8c07114a8319bef03852737a965321399a75d58f4e30acfbef8749eaa5a6e8e2" exitCode=0 Dec 05 01:25:58 crc kubenswrapper[4990]: I1205 01:25:58.020687 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" event={"ID":"f17d56f5-716f-4187-b328-abee78a41a82","Type":"ContainerDied","Data":"8c07114a8319bef03852737a965321399a75d58f4e30acfbef8749eaa5a6e8e2"} Dec 05 01:25:59 
crc kubenswrapper[4990]: I1205 01:25:59.353296 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.415262 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-util\") pod \"f17d56f5-716f-4187-b328-abee78a41a82\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.415332 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkvsz\" (UniqueName: \"kubernetes.io/projected/f17d56f5-716f-4187-b328-abee78a41a82-kube-api-access-dkvsz\") pod \"f17d56f5-716f-4187-b328-abee78a41a82\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.415428 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-bundle\") pod \"f17d56f5-716f-4187-b328-abee78a41a82\" (UID: \"f17d56f5-716f-4187-b328-abee78a41a82\") " Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.417307 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-bundle" (OuterVolumeSpecName: "bundle") pod "f17d56f5-716f-4187-b328-abee78a41a82" (UID: "f17d56f5-716f-4187-b328-abee78a41a82"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.424232 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17d56f5-716f-4187-b328-abee78a41a82-kube-api-access-dkvsz" (OuterVolumeSpecName: "kube-api-access-dkvsz") pod "f17d56f5-716f-4187-b328-abee78a41a82" (UID: "f17d56f5-716f-4187-b328-abee78a41a82"). InnerVolumeSpecName "kube-api-access-dkvsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.433367 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-util" (OuterVolumeSpecName: "util") pod "f17d56f5-716f-4187-b328-abee78a41a82" (UID: "f17d56f5-716f-4187-b328-abee78a41a82"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.517873 4990 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.518126 4990 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f17d56f5-716f-4187-b328-abee78a41a82-util\") on node \"crc\" DevicePath \"\"" Dec 05 01:25:59 crc kubenswrapper[4990]: I1205 01:25:59.518234 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkvsz\" (UniqueName: \"kubernetes.io/projected/f17d56f5-716f-4187-b328-abee78a41a82-kube-api-access-dkvsz\") on node \"crc\" DevicePath \"\"" Dec 05 01:26:00 crc kubenswrapper[4990]: I1205 01:26:00.039124 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" event={"ID":"f17d56f5-716f-4187-b328-abee78a41a82","Type":"ContainerDied","Data":"2f968e2dd0c55706a52d6abf096e495a7766194da51323058abd7b8e9f826191"} Dec 05 01:26:00 crc kubenswrapper[4990]: I1205 01:26:00.039191 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9" Dec 05 01:26:00 crc kubenswrapper[4990]: I1205 01:26:00.039194 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f968e2dd0c55706a52d6abf096e495a7766194da51323058abd7b8e9f826191" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.937506 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd"] Dec 05 01:26:08 crc kubenswrapper[4990]: E1205 01:26:08.938259 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bb3b38-72ab-4295-8b62-99f5f424c711" containerName="console" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938270 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bb3b38-72ab-4295-8b62-99f5f424c711" containerName="console" Dec 05 01:26:08 crc kubenswrapper[4990]: E1205 01:26:08.938282 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="util" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938288 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="util" Dec 05 01:26:08 crc kubenswrapper[4990]: E1205 01:26:08.938295 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="extract" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938300 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="extract" Dec 05 01:26:08 crc kubenswrapper[4990]: E1205 01:26:08.938309 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="pull" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938316 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="pull" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938418 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bb3b38-72ab-4295-8b62-99f5f424c711" containerName="console" Dec 
05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938429 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17d56f5-716f-4187-b328-abee78a41a82" containerName="extract" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.938812 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.941691 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.941765 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.941792 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.942020 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.942133 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-v59rj" Dec 05 01:26:08 crc kubenswrapper[4990]: I1205 01:26:08.955562 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd"] Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.035888 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6cl\" (UniqueName: \"kubernetes.io/projected/8e061a9c-0157-408c-85e1-bec1856d263e-kube-api-access-ll6cl\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.035945 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e061a9c-0157-408c-85e1-bec1856d263e-apiservice-cert\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.035964 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e061a9c-0157-408c-85e1-bec1856d263e-webhook-cert\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.137007 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e061a9c-0157-408c-85e1-bec1856d263e-apiservice-cert\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.137244 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/8e061a9c-0157-408c-85e1-bec1856d263e-webhook-cert\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.137324 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll6cl\" (UniqueName: \"kubernetes.io/projected/8e061a9c-0157-408c-85e1-bec1856d263e-kube-api-access-ll6cl\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.144123 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e061a9c-0157-408c-85e1-bec1856d263e-webhook-cert\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.153868 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll6cl\" (UniqueName: \"kubernetes.io/projected/8e061a9c-0157-408c-85e1-bec1856d263e-kube-api-access-ll6cl\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.159336 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e061a9c-0157-408c-85e1-bec1856d263e-apiservice-cert\") pod \"metallb-operator-controller-manager-7558d5d6d4-pk6pd\" (UID: \"8e061a9c-0157-408c-85e1-bec1856d263e\") " pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.254456 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.256412 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb"] Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.257415 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.259831 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-284nw" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.259990 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.266326 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.281855 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb"] Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.340369 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-webhook-cert\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.340411 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzzs\" (UniqueName: \"kubernetes.io/projected/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-kube-api-access-pzzzs\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.340636 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-apiservice-cert\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.441979 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-apiservice-cert\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.442043 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-webhook-cert\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.442127 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzzzs\" (UniqueName: \"kubernetes.io/projected/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-kube-api-access-pzzzs\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 
01:26:09.446065 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-apiservice-cert\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.454453 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-webhook-cert\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.457381 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzzzs\" (UniqueName: \"kubernetes.io/projected/7bb0a2c9-b86f-4393-a0f8-30e0d52aac17-kube-api-access-pzzzs\") pod \"metallb-operator-webhook-server-5fd747f769-nctkb\" (UID: \"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17\") " pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.604116 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.687743 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd"] Dec 05 01:26:09 crc kubenswrapper[4990]: W1205 01:26:09.694915 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e061a9c_0157_408c_85e1_bec1856d263e.slice/crio-f65ac7cb238ce64d53ff4cedffc5ba31522fff1c31522e0a93b2f86b60db9087 WatchSource:0}: Error finding container f65ac7cb238ce64d53ff4cedffc5ba31522fff1c31522e0a93b2f86b60db9087: Status 404 returned error can't find the container with id f65ac7cb238ce64d53ff4cedffc5ba31522fff1c31522e0a93b2f86b60db9087 Dec 05 01:26:09 crc kubenswrapper[4990]: I1205 01:26:09.783411 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb"] Dec 05 01:26:09 crc kubenswrapper[4990]: W1205 01:26:09.789697 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bb0a2c9_b86f_4393_a0f8_30e0d52aac17.slice/crio-dcfba0d714fa7b26ff587cea7686c0f51b0a9b14f4fbcac6521b401a6eb239f6 WatchSource:0}: Error finding container dcfba0d714fa7b26ff587cea7686c0f51b0a9b14f4fbcac6521b401a6eb239f6: Status 404 returned error can't find the container with id dcfba0d714fa7b26ff587cea7686c0f51b0a9b14f4fbcac6521b401a6eb239f6 Dec 05 01:26:10 crc kubenswrapper[4990]: I1205 01:26:10.092315 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" event={"ID":"8e061a9c-0157-408c-85e1-bec1856d263e","Type":"ContainerStarted","Data":"f65ac7cb238ce64d53ff4cedffc5ba31522fff1c31522e0a93b2f86b60db9087"}
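The block above is one complete pass of the kubelet volume manager for the two operator pods: each volume is first verified as attached (the reconciler_common.go "VerifyControllerAttachedVolume" records), a mount is then started, and operation_generator.go reports "MountVolume.SetUp succeeded"; only after the volumes are up does sandbox creation begin (the util.go "No sandbox for pod can be found" records). The "Failed to process watch event ... 404" warnings look like the usual benign race where the cgroup watcher notices the new crio-... cgroup before the runtime has registered the container. A minimal sketch of the desired-state/actual-state reconcile pattern behind those messages, in Go (an illustrative stand-in, not kubelet's actual reconciler; all names are invented):

```go
// Illustrative reconcile loop: anything in the desired world that is not in
// the actual world gets mounted, mirroring the started/succeeded pairs above.
package main

import "fmt"

type volume struct{ name, pod string }

func reconcile(desired, actual map[string]volume) {
	for key, v := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
			// stand-in for the SetUp call that materializes the mount
			actual[key] = v
			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
		}
	}
}

func main() {
	desired := map[string]volume{
		"webhook-cert":          {"webhook-cert", "metallb-operator-controller-manager-7558d5d6d4-pk6pd"},
		"kube-api-access-ll6cl": {"kube-api-access-ll6cl", "metallb-operator-controller-manager-7558d5d6d4-pk6pd"},
	}
	reconcile(desired, map[string]volume{})
}
```

The real reconciler runs this comparison continuously and funnels every mount through an operation executor, which is also what serializes the per-volume retries visible later in this log when a mount fails.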
event={"ID":"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17","Type":"ContainerStarted","Data":"dcfba0d714fa7b26ff587cea7686c0f51b0a9b14f4fbcac6521b401a6eb239f6"} Dec 05 01:26:14 crc kubenswrapper[4990]: I1205 01:26:14.126959 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" event={"ID":"8e061a9c-0157-408c-85e1-bec1856d263e","Type":"ContainerStarted","Data":"9d27387d411d564c12191766541e2c3c3a23dbfc06a8438327a37c9ee71286fa"} Dec 05 01:26:14 crc kubenswrapper[4990]: I1205 01:26:14.127698 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:14 crc kubenswrapper[4990]: I1205 01:26:14.128433 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" event={"ID":"7bb0a2c9-b86f-4393-a0f8-30e0d52aac17","Type":"ContainerStarted","Data":"0152a70c5b6a7e4d89b5737527a2c9f1b1c653fe2d6d7b918827f34a1cbb8dd4"} Dec 05 01:26:14 crc kubenswrapper[4990]: I1205 01:26:14.128984 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:14 crc kubenswrapper[4990]: I1205 01:26:14.157938 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" podStartSLOduration=1.952489168 podStartE2EDuration="6.157916961s" podCreationTimestamp="2025-12-05 01:26:08 +0000 UTC" firstStartedPulling="2025-12-05 01:26:09.700302304 +0000 UTC m=+1068.076517655" lastFinishedPulling="2025-12-05 01:26:13.905730087 +0000 UTC m=+1072.281945448" observedRunningTime="2025-12-05 01:26:14.154165194 +0000 UTC m=+1072.530380555" watchObservedRunningTime="2025-12-05 01:26:14.157916961 +0000 UTC m=+1072.534132332" Dec 05 01:26:14 crc kubenswrapper[4990]: I1205 01:26:14.179324 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" podStartSLOduration=1.052898176 podStartE2EDuration="5.179305178s" podCreationTimestamp="2025-12-05 01:26:09 +0000 UTC" firstStartedPulling="2025-12-05 01:26:09.792161699 +0000 UTC m=+1068.168377050" lastFinishedPulling="2025-12-05 01:26:13.918568681 +0000 UTC m=+1072.294784052" observedRunningTime="2025-12-05 01:26:14.174930633 +0000 UTC m=+1072.551145994" watchObservedRunningTime="2025-12-05 01:26:14.179305178 +0000 UTC m=+1072.555520549" Dec 05 01:26:29 crc kubenswrapper[4990]: I1205 01:26:29.614560 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5fd747f769-nctkb" Dec 05 01:26:49 crc kubenswrapper[4990]: I1205 01:26:49.256934 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7558d5d6d4-pk6pd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.039949 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8"] Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.041097 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.044663 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.045247 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-t2m2h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.046827 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2ptxd"] Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.049629 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.051281 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.051564 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.059259 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8"] Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.115738 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9bw6h"] Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.116532 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.118610 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zn6t8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.118763 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.118850 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.118943 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.128287 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-reloader\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.128321 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-frr-conf\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.128809 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn7d7\" (UniqueName: \"kubernetes.io/projected/602b14df-eebd-4d44-bf25-721b9f11fc17-kube-api-access-tn7d7\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.128857 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-frr-sockets\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.128883 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c36c2427-0455-4931-afa2-940f33ce9854-frr-startup\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.129550 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c36c2427-0455-4931-afa2-940f33ce9854-metrics-certs\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.129650 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69rg4\" (UniqueName: \"kubernetes.io/projected/c36c2427-0455-4931-afa2-940f33ce9854-kube-api-access-69rg4\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.129712 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/602b14df-eebd-4d44-bf25-721b9f11fc17-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.129743 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-metrics\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.139494 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-877sl"] Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.140870 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.144686 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.154377 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-877sl"] Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231449 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d0b0ae55-f1c8-4437-83cb-188142db3523-metallb-excludel2\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231527 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69rg4\" (UniqueName: \"kubernetes.io/projected/c36c2427-0455-4931-afa2-940f33ce9854-kube-api-access-69rg4\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231549 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcx9g\" (UniqueName: \"kubernetes.io/projected/d0b0ae55-f1c8-4437-83cb-188142db3523-kube-api-access-zcx9g\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231690 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/602b14df-eebd-4d44-bf25-721b9f11fc17-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231740 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-metrics\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.231819 4990 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.231869 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/602b14df-eebd-4d44-bf25-721b9f11fc17-cert podName:602b14df-eebd-4d44-bf25-721b9f11fc17 nodeName:}" failed. No retries permitted until 2025-12-05 01:26:50.731852104 +0000 UTC m=+1109.108067465 (durationBeforeRetry 500ms). 
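The E1205 record above is the first real failure in this section: the frr-k8s webhook pod reached the kubelet before its serving-certificate Secret existed (the Secret is created asynchronously, presumably by the operator's certificate machinery), so "Couldn't get secret" aborts MountVolume.SetUp and the operation is parked for a retry; the same mount succeeds about half a second later (01:26:50.746859, below). Kubelet handles the wait itself, but a component that needs to block on such an asynchronously created Secret could poll for it with client-go; a sketch under that assumption (illustrative, not kubelet or MetalLB code):

```go
// Illustrative only: waiting for a Secret that is created asynchronously,
// the situation the kubelet handles above by retrying MountVolume.
// Namespace and Secret name mirror the log.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Secrets("metallb-system").Get(ctx, "frr-k8s-webhook-server-cert", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling while the secret is absent
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("secret available; volume mount can proceed")
}
```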
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/602b14df-eebd-4d44-bf25-721b9f11fc17-cert") pod "frr-k8s-webhook-server-7fcb986d4-kscb8" (UID: "602b14df-eebd-4d44-bf25-721b9f11fc17") : secret "frr-k8s-webhook-server-cert" not found Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231886 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-reloader\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231904 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-frr-conf\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231923 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn7d7\" (UniqueName: \"kubernetes.io/projected/602b14df-eebd-4d44-bf25-721b9f11fc17-kube-api-access-tn7d7\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231940 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231961 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-metrics-certs\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231978 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-frr-sockets\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.231998 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c36c2427-0455-4931-afa2-940f33ce9854-frr-startup\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232014 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c02539b-f293-4f43-94ef-aefdd98984bc-cert\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232032 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgszj\" (UniqueName: \"kubernetes.io/projected/8c02539b-f293-4f43-94ef-aefdd98984bc-kube-api-access-vgszj\") pod 
\"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232050 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c02539b-f293-4f43-94ef-aefdd98984bc-metrics-certs\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232066 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c36c2427-0455-4931-afa2-940f33ce9854-metrics-certs\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232283 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-metrics\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232366 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-reloader\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232379 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-frr-conf\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232649 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c36c2427-0455-4931-afa2-940f33ce9854-frr-sockets\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.232827 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c36c2427-0455-4931-afa2-940f33ce9854-frr-startup\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.240874 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c36c2427-0455-4931-afa2-940f33ce9854-metrics-certs\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.253086 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn7d7\" (UniqueName: \"kubernetes.io/projected/602b14df-eebd-4d44-bf25-721b9f11fc17-kube-api-access-tn7d7\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.253984 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69rg4\" 
(UniqueName: \"kubernetes.io/projected/c36c2427-0455-4931-afa2-940f33ce9854-kube-api-access-69rg4\") pod \"frr-k8s-2ptxd\" (UID: \"c36c2427-0455-4931-afa2-940f33ce9854\") " pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.333635 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.334321 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-metrics-certs\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.333797 4990 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.334433 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c02539b-f293-4f43-94ef-aefdd98984bc-cert\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.334521 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist podName:d0b0ae55-f1c8-4437-83cb-188142db3523 nodeName:}" failed. No retries permitted until 2025-12-05 01:26:50.834498266 +0000 UTC m=+1109.210713627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist") pod "speaker-9bw6h" (UID: "d0b0ae55-f1c8-4437-83cb-188142db3523") : secret "metallb-memberlist" not found Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.334427 4990 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.334582 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgszj\" (UniqueName: \"kubernetes.io/projected/8c02539b-f293-4f43-94ef-aefdd98984bc-kube-api-access-vgszj\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.334601 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-metrics-certs podName:d0b0ae55-f1c8-4437-83cb-188142db3523 nodeName:}" failed. No retries permitted until 2025-12-05 01:26:50.834577429 +0000 UTC m=+1109.210792870 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-metrics-certs") pod "speaker-9bw6h" (UID: "d0b0ae55-f1c8-4437-83cb-188142db3523") : secret "speaker-certs-secret" not found Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.334622 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c02539b-f293-4f43-94ef-aefdd98984bc-metrics-certs\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.334704 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d0b0ae55-f1c8-4437-83cb-188142db3523-metallb-excludel2\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.334743 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcx9g\" (UniqueName: \"kubernetes.io/projected/d0b0ae55-f1c8-4437-83cb-188142db3523-kube-api-access-zcx9g\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.335507 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d0b0ae55-f1c8-4437-83cb-188142db3523-metallb-excludel2\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.340190 4990 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.340642 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c02539b-f293-4f43-94ef-aefdd98984bc-metrics-certs\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.348998 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c02539b-f293-4f43-94ef-aefdd98984bc-cert\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.351149 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcx9g\" (UniqueName: \"kubernetes.io/projected/d0b0ae55-f1c8-4437-83cb-188142db3523-kube-api-access-zcx9g\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.351867 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgszj\" (UniqueName: \"kubernetes.io/projected/8c02539b-f293-4f43-94ef-aefdd98984bc-kube-api-access-vgszj\") pod \"controller-f8648f98b-877sl\" (UID: \"8c02539b-f293-4f43-94ef-aefdd98984bc\") " pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.373101 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.464579 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.675063 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-877sl"] Dec 05 01:26:50 crc kubenswrapper[4990]: W1205 01:26:50.682585 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c02539b_f293_4f43_94ef_aefdd98984bc.slice/crio-937e2690aa064b4402fb956ad735d79ada82d56defa22ac5c0bfe46e3b4c5b70 WatchSource:0}: Error finding container 937e2690aa064b4402fb956ad735d79ada82d56defa22ac5c0bfe46e3b4c5b70: Status 404 returned error can't find the container with id 937e2690aa064b4402fb956ad735d79ada82d56defa22ac5c0bfe46e3b4c5b70 Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.740893 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/602b14df-eebd-4d44-bf25-721b9f11fc17-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.746859 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/602b14df-eebd-4d44-bf25-721b9f11fc17-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kscb8\" (UID: \"602b14df-eebd-4d44-bf25-721b9f11fc17\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.842388 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.842710 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-metrics-certs\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.842573 4990 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 01:26:50 crc kubenswrapper[4990]: E1205 01:26:50.842827 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist podName:d0b0ae55-f1c8-4437-83cb-188142db3523 nodeName:}" failed. No retries permitted until 2025-12-05 01:26:51.842811789 +0000 UTC m=+1110.219027140 (durationBeforeRetry 1s). 
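Worth noting across the nestedpendingoperations.go records: the speaker's first failed mount attempts at 01:26:50.33 were parked with durationBeforeRetry 500ms, and the failure that follows below doubles the delay to 1s. That is the per-operation exponential backoff the kubelet applies to volume mounts; the 500ms starting point and the doubling are visible in this log, while the eventual cap (about 2m2s in the kubelet sources I am aware of) is an assumption, not something this excerpt shows. A minimal sketch of the schedule:

```go
// Sketch of the doubling retry delay seen in the records around this point
// (durationBeforeRetry 500ms, then 1s, ...). The 2m2s cap is an assumption
// about kubelet's backoff, not taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed; durationBeforeRetry %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```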
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist") pod "speaker-9bw6h" (UID: "d0b0ae55-f1c8-4437-83cb-188142db3523") : secret "metallb-memberlist" not found Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.847061 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-metrics-certs\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:50 crc kubenswrapper[4990]: I1205 01:26:50.962038 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.219183 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8"] Dec 05 01:26:51 crc kubenswrapper[4990]: W1205 01:26:51.229049 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602b14df_eebd_4d44_bf25_721b9f11fc17.slice/crio-366275028ccfc44d7bfdf346d7ce882c6b2c7f25dfab6ab21863a838c1b1a4c1 WatchSource:0}: Error finding container 366275028ccfc44d7bfdf346d7ce882c6b2c7f25dfab6ab21863a838c1b1a4c1: Status 404 returned error can't find the container with id 366275028ccfc44d7bfdf346d7ce882c6b2c7f25dfab6ab21863a838c1b1a4c1 Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.368118 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" event={"ID":"602b14df-eebd-4d44-bf25-721b9f11fc17","Type":"ContainerStarted","Data":"366275028ccfc44d7bfdf346d7ce882c6b2c7f25dfab6ab21863a838c1b1a4c1"} Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.369161 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"7f48a74f882a533207accef0445333666214e7dd9ba1a320e51d9e9148914d5f"} Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.371028 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-877sl" event={"ID":"8c02539b-f293-4f43-94ef-aefdd98984bc","Type":"ContainerStarted","Data":"999239d665f4479fa0a78aa3abfefea8b501c347988f74c2523c11c956a07553"} Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.371059 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-877sl" event={"ID":"8c02539b-f293-4f43-94ef-aefdd98984bc","Type":"ContainerStarted","Data":"18cf49f36b345e32e79c3188b5e4a71c1b097d92cf849fd3d6badbf8759d5aa4"} Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.371068 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-877sl" event={"ID":"8c02539b-f293-4f43-94ef-aefdd98984bc","Type":"ContainerStarted","Data":"937e2690aa064b4402fb956ad735d79ada82d56defa22ac5c0bfe46e3b4c5b70"} Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.371164 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.388010 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-877sl" podStartSLOduration=1.387989218 podStartE2EDuration="1.387989218s" 
podCreationTimestamp="2025-12-05 01:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:26:51.383413988 +0000 UTC m=+1109.759629369" watchObservedRunningTime="2025-12-05 01:26:51.387989218 +0000 UTC m=+1109.764204579" Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.855505 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.865143 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d0b0ae55-f1c8-4437-83cb-188142db3523-memberlist\") pod \"speaker-9bw6h\" (UID: \"d0b0ae55-f1c8-4437-83cb-188142db3523\") " pod="metallb-system/speaker-9bw6h" Dec 05 01:26:51 crc kubenswrapper[4990]: I1205 01:26:51.940876 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9bw6h" Dec 05 01:26:51 crc kubenswrapper[4990]: W1205 01:26:51.960398 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0b0ae55_f1c8_4437_83cb_188142db3523.slice/crio-6af32ad7f729a06fc7804769f9e9b5e88bc81c41ef0c650a0b299d28d52a213e WatchSource:0}: Error finding container 6af32ad7f729a06fc7804769f9e9b5e88bc81c41ef0c650a0b299d28d52a213e: Status 404 returned error can't find the container with id 6af32ad7f729a06fc7804769f9e9b5e88bc81c41ef0c650a0b299d28d52a213e Dec 05 01:26:52 crc kubenswrapper[4990]: I1205 01:26:52.385919 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9bw6h" event={"ID":"d0b0ae55-f1c8-4437-83cb-188142db3523","Type":"ContainerStarted","Data":"95adcf504c3f51d88485e0a80bf430d119ca72744e37fe5e0a9d05e515d39d78"} Dec 05 01:26:52 crc kubenswrapper[4990]: I1205 01:26:52.385961 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9bw6h" event={"ID":"d0b0ae55-f1c8-4437-83cb-188142db3523","Type":"ContainerStarted","Data":"6af32ad7f729a06fc7804769f9e9b5e88bc81c41ef0c650a0b299d28d52a213e"} Dec 05 01:26:53 crc kubenswrapper[4990]: I1205 01:26:53.395096 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9bw6h" event={"ID":"d0b0ae55-f1c8-4437-83cb-188142db3523","Type":"ContainerStarted","Data":"92d62cfe919d6af73c1ff7bdd480761162f25549dc5f4753f1ee97b91843f53e"} Dec 05 01:26:53 crc kubenswrapper[4990]: I1205 01:26:53.395400 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9bw6h" Dec 05 01:26:53 crc kubenswrapper[4990]: I1205 01:26:53.432957 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9bw6h" podStartSLOduration=3.432941712 podStartE2EDuration="3.432941712s" podCreationTimestamp="2025-12-05 01:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:26:53.429808804 +0000 UTC m=+1111.806024165" watchObservedRunningTime="2025-12-05 01:26:53.432941712 +0000 UTC m=+1111.809157073" Dec 05 01:26:58 crc kubenswrapper[4990]: I1205 01:26:58.432945 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" 
event={"ID":"602b14df-eebd-4d44-bf25-721b9f11fc17","Type":"ContainerStarted","Data":"8df8bf68d41434b7a6d32b66566bc1116f42f83223ec288f0e09332b909b66d3"} Dec 05 01:26:58 crc kubenswrapper[4990]: I1205 01:26:58.433820 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:26:58 crc kubenswrapper[4990]: I1205 01:26:58.435477 4990 generic.go:334] "Generic (PLEG): container finished" podID="c36c2427-0455-4931-afa2-940f33ce9854" containerID="98162bf241a97c91d218490c9098612ce00a9e31feb8c166e8d866ac525f8e5c" exitCode=0 Dec 05 01:26:58 crc kubenswrapper[4990]: I1205 01:26:58.435561 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerDied","Data":"98162bf241a97c91d218490c9098612ce00a9e31feb8c166e8d866ac525f8e5c"} Dec 05 01:26:58 crc kubenswrapper[4990]: I1205 01:26:58.468192 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" podStartSLOduration=2.2604114109999998 podStartE2EDuration="8.468162873s" podCreationTimestamp="2025-12-05 01:26:50 +0000 UTC" firstStartedPulling="2025-12-05 01:26:51.232614699 +0000 UTC m=+1109.608830060" lastFinishedPulling="2025-12-05 01:26:57.440366161 +0000 UTC m=+1115.816581522" observedRunningTime="2025-12-05 01:26:58.456547313 +0000 UTC m=+1116.832762744" watchObservedRunningTime="2025-12-05 01:26:58.468162873 +0000 UTC m=+1116.844378274" Dec 05 01:26:59 crc kubenswrapper[4990]: I1205 01:26:59.442675 4990 generic.go:334] "Generic (PLEG): container finished" podID="c36c2427-0455-4931-afa2-940f33ce9854" containerID="d4212c8cfcf511f93f02bccd595f3a08cf800f64fcd0eb70be29d845666837a1" exitCode=0 Dec 05 01:26:59 crc kubenswrapper[4990]: I1205 01:26:59.442787 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerDied","Data":"d4212c8cfcf511f93f02bccd595f3a08cf800f64fcd0eb70be29d845666837a1"} Dec 05 01:27:00 crc kubenswrapper[4990]: I1205 01:27:00.453174 4990 generic.go:334] "Generic (PLEG): container finished" podID="c36c2427-0455-4931-afa2-940f33ce9854" containerID="f9cae9e5a90131d61a4746e562bbfc9f1b8dd0be1d62d14e7fe8d14f2e56f6d6" exitCode=0 Dec 05 01:27:00 crc kubenswrapper[4990]: I1205 01:27:00.453227 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerDied","Data":"f9cae9e5a90131d61a4746e562bbfc9f1b8dd0be1d62d14e7fe8d14f2e56f6d6"} Dec 05 01:27:01 crc kubenswrapper[4990]: I1205 01:27:01.466286 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"7566387892f265c23450e280690b91691ed0f43442250ecf366962b7e328379c"} Dec 05 01:27:01 crc kubenswrapper[4990]: I1205 01:27:01.466624 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"22287af09e5bb669502993caa33b06e2a6ad67c0acb0086188a74e0cdf99eec2"} Dec 05 01:27:01 crc kubenswrapper[4990]: I1205 01:27:01.466634 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" 
event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"1875a66af1ed94331e1079fa08002a39a4a82db0ab559e527839f63482e00550"} Dec 05 01:27:01 crc kubenswrapper[4990]: I1205 01:27:01.466643 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"df6a130173e4017efb96b3a562414d6c753ec318101c273f461bce8056a76c2a"} Dec 05 01:27:01 crc kubenswrapper[4990]: I1205 01:27:01.466654 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"62e10d4f07e06b9a8d95a1cb03a39be692a2079dc51350f67aeb705cce3534f0"} Dec 05 01:27:02 crc kubenswrapper[4990]: I1205 01:27:02.477125 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2ptxd" event={"ID":"c36c2427-0455-4931-afa2-940f33ce9854","Type":"ContainerStarted","Data":"edfba171d83ad1f77d2ca2e853e6bf1a3ac07960af722a09edc75649458f0977"} Dec 05 01:27:02 crc kubenswrapper[4990]: I1205 01:27:02.477614 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:27:02 crc kubenswrapper[4990]: I1205 01:27:02.505753 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2ptxd" podStartSLOduration=6.082048937 podStartE2EDuration="12.505733546s" podCreationTimestamp="2025-12-05 01:26:50 +0000 UTC" firstStartedPulling="2025-12-05 01:26:51.045553081 +0000 UTC m=+1109.421768442" lastFinishedPulling="2025-12-05 01:26:57.46923768 +0000 UTC m=+1115.845453051" observedRunningTime="2025-12-05 01:27:02.502643139 +0000 UTC m=+1120.878858540" watchObservedRunningTime="2025-12-05 01:27:02.505733546 +0000 UTC m=+1120.881948907" Dec 05 01:27:05 crc kubenswrapper[4990]: I1205 01:27:05.373748 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:27:05 crc kubenswrapper[4990]: I1205 01:27:05.410440 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:27:10 crc kubenswrapper[4990]: I1205 01:27:10.376743 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2ptxd" Dec 05 01:27:10 crc kubenswrapper[4990]: I1205 01:27:10.470758 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-877sl" Dec 05 01:27:10 crc kubenswrapper[4990]: I1205 01:27:10.969048 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kscb8" Dec 05 01:27:11 crc kubenswrapper[4990]: I1205 01:27:11.944779 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9bw6h" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.686303 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv"] Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.688051 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.692342 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.706105 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv"] Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.766105 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.766175 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wh8\" (UniqueName: \"kubernetes.io/projected/f475647f-cf4a-47da-844d-a2952b514ea0-kube-api-access-p4wh8\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.766224 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.867844 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.867905 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4wh8\" (UniqueName: \"kubernetes.io/projected/f475647f-cf4a-47da-844d-a2952b514ea0-kube-api-access-p4wh8\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.867957 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.868504 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.868546 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:13 crc kubenswrapper[4990]: I1205 01:27:13.901620 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4wh8\" (UniqueName: \"kubernetes.io/projected/f475647f-cf4a-47da-844d-a2952b514ea0-kube-api-access-p4wh8\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:14 crc kubenswrapper[4990]: I1205 01:27:14.013685 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:14 crc kubenswrapper[4990]: I1205 01:27:14.475987 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv"] Dec 05 01:27:14 crc kubenswrapper[4990]: W1205 01:27:14.481899 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf475647f_cf4a_47da_844d_a2952b514ea0.slice/crio-20489dae9cc65305d6d75151737fc926cf1d6ac77a6a7cca4a96cab225b62f1c WatchSource:0}: Error finding container 20489dae9cc65305d6d75151737fc926cf1d6ac77a6a7cca4a96cab225b62f1c: Status 404 returned error can't find the container with id 20489dae9cc65305d6d75151737fc926cf1d6ac77a6a7cca4a96cab225b62f1c Dec 05 01:27:14 crc kubenswrapper[4990]: I1205 01:27:14.563609 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" event={"ID":"f475647f-cf4a-47da-844d-a2952b514ea0","Type":"ContainerStarted","Data":"20489dae9cc65305d6d75151737fc926cf1d6ac77a6a7cca4a96cab225b62f1c"} Dec 05 01:27:15 crc kubenswrapper[4990]: I1205 01:27:15.574119 4990 generic.go:334] "Generic (PLEG): container finished" podID="f475647f-cf4a-47da-844d-a2952b514ea0" containerID="6acc92d937edccfbff80305ef4b9a2aa37bad9bb86bdfffbddcde54661aac9c7" exitCode=0 Dec 05 01:27:15 crc kubenswrapper[4990]: I1205 01:27:15.574219 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" event={"ID":"f475647f-cf4a-47da-844d-a2952b514ea0","Type":"ContainerDied","Data":"6acc92d937edccfbff80305ef4b9a2aa37bad9bb86bdfffbddcde54661aac9c7"} Dec 05 01:27:19 crc kubenswrapper[4990]: I1205 01:27:19.611212 4990 generic.go:334] "Generic (PLEG): container finished" podID="f475647f-cf4a-47da-844d-a2952b514ea0" containerID="32e832b4a96f77e10f7633f134e0b44fa54eb69fc8ac56e453e96921c47080b2" exitCode=0 Dec 05 01:27:19 crc kubenswrapper[4990]: I1205 01:27:19.611331 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" event={"ID":"f475647f-cf4a-47da-844d-a2952b514ea0","Type":"ContainerDied","Data":"32e832b4a96f77e10f7633f134e0b44fa54eb69fc8ac56e453e96921c47080b2"} Dec 05 01:27:20 crc kubenswrapper[4990]: I1205 01:27:20.621431 4990 generic.go:334] "Generic (PLEG): container finished" podID="f475647f-cf4a-47da-844d-a2952b514ea0" containerID="0bdf7070a2cb803817dc7a214ffab7267a41cca937749d2933ad2c884314eb58" exitCode=0 Dec 05 01:27:20 crc kubenswrapper[4990]: I1205 01:27:20.621530 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" event={"ID":"f475647f-cf4a-47da-844d-a2952b514ea0","Type":"ContainerDied","Data":"0bdf7070a2cb803817dc7a214ffab7267a41cca937749d2933ad2c884314eb58"} Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.046531 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.199219 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-bundle\") pod \"f475647f-cf4a-47da-844d-a2952b514ea0\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.200065 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4wh8\" (UniqueName: \"kubernetes.io/projected/f475647f-cf4a-47da-844d-a2952b514ea0-kube-api-access-p4wh8\") pod \"f475647f-cf4a-47da-844d-a2952b514ea0\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.200121 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-util\") pod \"f475647f-cf4a-47da-844d-a2952b514ea0\" (UID: \"f475647f-cf4a-47da-844d-a2952b514ea0\") " Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.201524 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-bundle" (OuterVolumeSpecName: "bundle") pod "f475647f-cf4a-47da-844d-a2952b514ea0" (UID: "f475647f-cf4a-47da-844d-a2952b514ea0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.210137 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f475647f-cf4a-47da-844d-a2952b514ea0-kube-api-access-p4wh8" (OuterVolumeSpecName: "kube-api-access-p4wh8") pod "f475647f-cf4a-47da-844d-a2952b514ea0" (UID: "f475647f-cf4a-47da-844d-a2952b514ea0"). InnerVolumeSpecName "kube-api-access-p4wh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.223558 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-util" (OuterVolumeSpecName: "util") pod "f475647f-cf4a-47da-844d-a2952b514ea0" (UID: "f475647f-cf4a-47da-844d-a2952b514ea0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.301370 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4wh8\" (UniqueName: \"kubernetes.io/projected/f475647f-cf4a-47da-844d-a2952b514ea0-kube-api-access-p4wh8\") on node \"crc\" DevicePath \"\"" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.301423 4990 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-util\") on node \"crc\" DevicePath \"\"" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.301440 4990 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f475647f-cf4a-47da-844d-a2952b514ea0-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.642117 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" event={"ID":"f475647f-cf4a-47da-844d-a2952b514ea0","Type":"ContainerDied","Data":"20489dae9cc65305d6d75151737fc926cf1d6ac77a6a7cca4a96cab225b62f1c"} Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.642606 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20489dae9cc65305d6d75151737fc926cf1d6ac77a6a7cca4a96cab225b62f1c" Dec 05 01:27:22 crc kubenswrapper[4990]: I1205 01:27:22.642668 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.502708 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g"] Dec 05 01:27:27 crc kubenswrapper[4990]: E1205 01:27:27.503300 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="util" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.503311 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="util" Dec 05 01:27:27 crc kubenswrapper[4990]: E1205 01:27:27.503325 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="extract" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.503331 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="extract" Dec 05 01:27:27 crc kubenswrapper[4990]: E1205 01:27:27.503344 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="pull" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.503350 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="pull" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.503462 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f475647f-cf4a-47da-844d-a2952b514ea0" containerName="extract" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.503816 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.505688 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.505938 4990 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-65hck" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.510963 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.525877 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g"] Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.571001 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3989bb3c-b66d-460f-837b-5feada6877b5-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ftq8g\" (UID: \"3989bb3c-b66d-460f-837b-5feada6877b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.571257 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlwjk\" (UniqueName: \"kubernetes.io/projected/3989bb3c-b66d-460f-837b-5feada6877b5-kube-api-access-xlwjk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ftq8g\" (UID: \"3989bb3c-b66d-460f-837b-5feada6877b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.672869 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlwjk\" (UniqueName: \"kubernetes.io/projected/3989bb3c-b66d-460f-837b-5feada6877b5-kube-api-access-xlwjk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ftq8g\" (UID: \"3989bb3c-b66d-460f-837b-5feada6877b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.672934 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3989bb3c-b66d-460f-837b-5feada6877b5-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ftq8g\" (UID: \"3989bb3c-b66d-460f-837b-5feada6877b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.673347 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3989bb3c-b66d-460f-837b-5feada6877b5-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ftq8g\" (UID: \"3989bb3c-b66d-460f-837b-5feada6877b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.698374 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlwjk\" (UniqueName: \"kubernetes.io/projected/3989bb3c-b66d-460f-837b-5feada6877b5-kube-api-access-xlwjk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ftq8g\" (UID: \"3989bb3c-b66d-460f-837b-5feada6877b5\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:27 crc kubenswrapper[4990]: I1205 01:27:27.817867 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" Dec 05 01:27:28 crc kubenswrapper[4990]: I1205 01:27:28.239079 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g"] Dec 05 01:27:28 crc kubenswrapper[4990]: W1205 01:27:28.241534 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3989bb3c_b66d_460f_837b_5feada6877b5.slice/crio-f538cc7573cd8b0d66591d0691ec202079f40ad7d528605a752171bf2075b1c6 WatchSource:0}: Error finding container f538cc7573cd8b0d66591d0691ec202079f40ad7d528605a752171bf2075b1c6: Status 404 returned error can't find the container with id f538cc7573cd8b0d66591d0691ec202079f40ad7d528605a752171bf2075b1c6 Dec 05 01:27:28 crc kubenswrapper[4990]: I1205 01:27:28.682210 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" event={"ID":"3989bb3c-b66d-460f-837b-5feada6877b5","Type":"ContainerStarted","Data":"f538cc7573cd8b0d66591d0691ec202079f40ad7d528605a752171bf2075b1c6"} Dec 05 01:27:35 crc kubenswrapper[4990]: I1205 01:27:35.734717 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" event={"ID":"3989bb3c-b66d-460f-837b-5feada6877b5","Type":"ContainerStarted","Data":"771fe9ddfdb935a837e6018fcd706b4c89f917e9b2a28f74899351b60da97091"} Dec 05 01:27:35 crc kubenswrapper[4990]: I1205 01:27:35.775427 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ftq8g" podStartSLOduration=2.183798959 podStartE2EDuration="8.775402581s" podCreationTimestamp="2025-12-05 01:27:27 +0000 UTC" firstStartedPulling="2025-12-05 01:27:28.24495742 +0000 UTC m=+1146.621172821" lastFinishedPulling="2025-12-05 01:27:34.836561082 +0000 UTC m=+1153.212776443" observedRunningTime="2025-12-05 01:27:35.769225656 +0000 UTC m=+1154.145441057" watchObservedRunningTime="2025-12-05 01:27:35.775402581 +0000 UTC m=+1154.151617972" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.455602 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-86w72"] Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.456625 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.458505 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.458881 4990 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8phrc" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.459121 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.463783 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-86w72"] Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.618816 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92a9406b-f802-4b93-b392-5121fb343101-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-86w72\" (UID: \"92a9406b-f802-4b93-b392-5121fb343101\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.619140 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44c8\" (UniqueName: \"kubernetes.io/projected/92a9406b-f802-4b93-b392-5121fb343101-kube-api-access-x44c8\") pod \"cert-manager-webhook-f4fb5df64-86w72\" (UID: \"92a9406b-f802-4b93-b392-5121fb343101\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.720665 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92a9406b-f802-4b93-b392-5121fb343101-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-86w72\" (UID: \"92a9406b-f802-4b93-b392-5121fb343101\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.720738 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44c8\" (UniqueName: \"kubernetes.io/projected/92a9406b-f802-4b93-b392-5121fb343101-kube-api-access-x44c8\") pod \"cert-manager-webhook-f4fb5df64-86w72\" (UID: \"92a9406b-f802-4b93-b392-5121fb343101\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.741141 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92a9406b-f802-4b93-b392-5121fb343101-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-86w72\" (UID: \"92a9406b-f802-4b93-b392-5121fb343101\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.744427 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44c8\" (UniqueName: \"kubernetes.io/projected/92a9406b-f802-4b93-b392-5121fb343101-kube-api-access-x44c8\") pod \"cert-manager-webhook-f4fb5df64-86w72\" (UID: \"92a9406b-f802-4b93-b392-5121fb343101\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:38 crc kubenswrapper[4990]: I1205 01:27:38.774032 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:39 crc kubenswrapper[4990]: I1205 01:27:39.379080 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-86w72"] Dec 05 01:27:39 crc kubenswrapper[4990]: I1205 01:27:39.757622 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" event={"ID":"92a9406b-f802-4b93-b392-5121fb343101","Type":"ContainerStarted","Data":"90d80f5f50f452dfd60e17a803367e229b64fbbed80783d870f62c39f8dd7b73"} Dec 05 01:27:40 crc kubenswrapper[4990]: I1205 01:27:40.861637 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j"] Dec 05 01:27:40 crc kubenswrapper[4990]: I1205 01:27:40.862624 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:40 crc kubenswrapper[4990]: I1205 01:27:40.865790 4990 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-w8qsq" Dec 05 01:27:40 crc kubenswrapper[4990]: I1205 01:27:40.866800 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j"] Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.052236 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hctl\" (UniqueName: \"kubernetes.io/projected/11e7cf47-261e-4451-b1c0-b325a8236fae-kube-api-access-6hctl\") pod \"cert-manager-cainjector-855d9ccff4-7rd9j\" (UID: \"11e7cf47-261e-4451-b1c0-b325a8236fae\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.052306 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11e7cf47-261e-4451-b1c0-b325a8236fae-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7rd9j\" (UID: \"11e7cf47-261e-4451-b1c0-b325a8236fae\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.153289 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hctl\" (UniqueName: \"kubernetes.io/projected/11e7cf47-261e-4451-b1c0-b325a8236fae-kube-api-access-6hctl\") pod \"cert-manager-cainjector-855d9ccff4-7rd9j\" (UID: \"11e7cf47-261e-4451-b1c0-b325a8236fae\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.153394 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11e7cf47-261e-4451-b1c0-b325a8236fae-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7rd9j\" (UID: \"11e7cf47-261e-4451-b1c0-b325a8236fae\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.175198 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11e7cf47-261e-4451-b1c0-b325a8236fae-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7rd9j\" (UID: \"11e7cf47-261e-4451-b1c0-b325a8236fae\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.175320 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hctl\" (UniqueName: \"kubernetes.io/projected/11e7cf47-261e-4451-b1c0-b325a8236fae-kube-api-access-6hctl\") pod \"cert-manager-cainjector-855d9ccff4-7rd9j\" (UID: \"11e7cf47-261e-4451-b1c0-b325a8236fae\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.191135 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.415330 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j"] Dec 05 01:27:41 crc kubenswrapper[4990]: W1205 01:27:41.420634 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11e7cf47_261e_4451_b1c0_b325a8236fae.slice/crio-5b720d56545552326aa72b7694f2614c85a9c81a470b30c93cb1d37660458d40 WatchSource:0}: Error finding container 5b720d56545552326aa72b7694f2614c85a9c81a470b30c93cb1d37660458d40: Status 404 returned error can't find the container with id 5b720d56545552326aa72b7694f2614c85a9c81a470b30c93cb1d37660458d40 Dec 05 01:27:41 crc kubenswrapper[4990]: I1205 01:27:41.771607 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" event={"ID":"11e7cf47-261e-4451-b1c0-b325a8236fae","Type":"ContainerStarted","Data":"5b720d56545552326aa72b7694f2614c85a9c81a470b30c93cb1d37660458d40"} Dec 05 01:27:48 crc kubenswrapper[4990]: I1205 01:27:48.835155 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" event={"ID":"11e7cf47-261e-4451-b1c0-b325a8236fae","Type":"ContainerStarted","Data":"d8ce518f1ba7c67930427c016e11c7861580dde09ecf2257ea56f6cc297cc0d7"} Dec 05 01:27:48 crc kubenswrapper[4990]: I1205 01:27:48.843292 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" event={"ID":"92a9406b-f802-4b93-b392-5121fb343101","Type":"ContainerStarted","Data":"2df77f9702c0d10de534a51c5cfb52c381330f942022a60cb27453d8563c0aff"} Dec 05 01:27:48 crc kubenswrapper[4990]: I1205 01:27:48.843626 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:48 crc kubenswrapper[4990]: I1205 01:27:48.867668 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7rd9j" podStartSLOduration=1.9084855219999999 podStartE2EDuration="8.867633294s" podCreationTimestamp="2025-12-05 01:27:40 +0000 UTC" firstStartedPulling="2025-12-05 01:27:41.423349097 +0000 UTC m=+1159.799564478" lastFinishedPulling="2025-12-05 01:27:48.382496899 +0000 UTC m=+1166.758712250" observedRunningTime="2025-12-05 01:27:48.862005964 +0000 UTC m=+1167.238221325" watchObservedRunningTime="2025-12-05 01:27:48.867633294 +0000 UTC m=+1167.243848655" Dec 05 01:27:48 crc kubenswrapper[4990]: I1205 01:27:48.885125 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" podStartSLOduration=1.92136485 podStartE2EDuration="10.88510185s" podCreationTimestamp="2025-12-05 01:27:38 +0000 UTC" firstStartedPulling="2025-12-05 01:27:39.393735999 +0000 UTC m=+1157.769951360" lastFinishedPulling="2025-12-05 01:27:48.357472979 +0000 UTC 
m=+1166.733688360" observedRunningTime="2025-12-05 01:27:48.881889539 +0000 UTC m=+1167.258104900" watchObservedRunningTime="2025-12-05 01:27:48.88510185 +0000 UTC m=+1167.261317211" Dec 05 01:27:53 crc kubenswrapper[4990]: I1205 01:27:53.779567 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-86w72" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.441556 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-j6bkz"] Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.443722 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.447321 4990 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9f8m2" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.453160 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-j6bkz"] Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.598231 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81603236-0e38-4834-8f5f-031321e5862c-bound-sa-token\") pod \"cert-manager-86cb77c54b-j6bkz\" (UID: \"81603236-0e38-4834-8f5f-031321e5862c\") " pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.598309 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqln\" (UniqueName: \"kubernetes.io/projected/81603236-0e38-4834-8f5f-031321e5862c-kube-api-access-4jqln\") pod \"cert-manager-86cb77c54b-j6bkz\" (UID: \"81603236-0e38-4834-8f5f-031321e5862c\") " pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.699502 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81603236-0e38-4834-8f5f-031321e5862c-bound-sa-token\") pod \"cert-manager-86cb77c54b-j6bkz\" (UID: \"81603236-0e38-4834-8f5f-031321e5862c\") " pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.699595 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqln\" (UniqueName: \"kubernetes.io/projected/81603236-0e38-4834-8f5f-031321e5862c-kube-api-access-4jqln\") pod \"cert-manager-86cb77c54b-j6bkz\" (UID: \"81603236-0e38-4834-8f5f-031321e5862c\") " pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.727679 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqln\" (UniqueName: \"kubernetes.io/projected/81603236-0e38-4834-8f5f-031321e5862c-kube-api-access-4jqln\") pod \"cert-manager-86cb77c54b-j6bkz\" (UID: \"81603236-0e38-4834-8f5f-031321e5862c\") " pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 01:27:57.728411 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81603236-0e38-4834-8f5f-031321e5862c-bound-sa-token\") pod \"cert-manager-86cb77c54b-j6bkz\" (UID: \"81603236-0e38-4834-8f5f-031321e5862c\") " pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:57 crc kubenswrapper[4990]: I1205 
01:27:57.779306 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-j6bkz" Dec 05 01:27:58 crc kubenswrapper[4990]: I1205 01:27:58.245235 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-j6bkz"] Dec 05 01:27:58 crc kubenswrapper[4990]: W1205 01:27:58.256278 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81603236_0e38_4834_8f5f_031321e5862c.slice/crio-392e61280231951cad086cd86e5ef5c62df703e2e15c893facb38f223fb03d3f WatchSource:0}: Error finding container 392e61280231951cad086cd86e5ef5c62df703e2e15c893facb38f223fb03d3f: Status 404 returned error can't find the container with id 392e61280231951cad086cd86e5ef5c62df703e2e15c893facb38f223fb03d3f Dec 05 01:27:58 crc kubenswrapper[4990]: I1205 01:27:58.926503 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-j6bkz" event={"ID":"81603236-0e38-4834-8f5f-031321e5862c","Type":"ContainerStarted","Data":"4884f10c9cc10806c80f42724bedfbfdffd209871c82aad207816d9766bf0330"} Dec 05 01:27:58 crc kubenswrapper[4990]: I1205 01:27:58.926786 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-j6bkz" event={"ID":"81603236-0e38-4834-8f5f-031321e5862c","Type":"ContainerStarted","Data":"392e61280231951cad086cd86e5ef5c62df703e2e15c893facb38f223fb03d3f"} Dec 05 01:27:58 crc kubenswrapper[4990]: I1205 01:27:58.942258 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-j6bkz" podStartSLOduration=1.942241104 podStartE2EDuration="1.942241104s" podCreationTimestamp="2025-12-05 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:27:58.940885386 +0000 UTC m=+1177.317100747" watchObservedRunningTime="2025-12-05 01:27:58.942241104 +0000 UTC m=+1177.318456465" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.435701 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6v7hg"] Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.437980 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.442868 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wl65h" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.443968 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.451699 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.456661 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6v7hg"] Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.542914 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vzr\" (UniqueName: \"kubernetes.io/projected/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e-kube-api-access-74vzr\") pod \"openstack-operator-index-6v7hg\" (UID: \"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e\") " pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.644239 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74vzr\" (UniqueName: \"kubernetes.io/projected/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e-kube-api-access-74vzr\") pod \"openstack-operator-index-6v7hg\" (UID: \"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e\") " pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.665103 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vzr\" (UniqueName: \"kubernetes.io/projected/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e-kube-api-access-74vzr\") pod \"openstack-operator-index-6v7hg\" (UID: \"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e\") " pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.774277 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:07 crc kubenswrapper[4990]: I1205 01:28:07.995912 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6v7hg"] Dec 05 01:28:09 crc kubenswrapper[4990]: I1205 01:28:09.000817 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6v7hg" event={"ID":"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e","Type":"ContainerStarted","Data":"becd24c2020997b4409513834f9ac2b21fa3c5122396cecdb0238917c78de4fb"} Dec 05 01:28:10 crc kubenswrapper[4990]: I1205 01:28:10.211803 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6v7hg"] Dec 05 01:28:10 crc kubenswrapper[4990]: I1205 01:28:10.823902 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2rn2k"] Dec 05 01:28:10 crc kubenswrapper[4990]: I1205 01:28:10.825375 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:10 crc kubenswrapper[4990]: I1205 01:28:10.839612 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2rn2k"] Dec 05 01:28:10 crc kubenswrapper[4990]: I1205 01:28:10.989941 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx8f\" (UniqueName: \"kubernetes.io/projected/a781f496-a9fd-4e90-8030-853e55d8c7d9-kube-api-access-tdx8f\") pod \"openstack-operator-index-2rn2k\" (UID: \"a781f496-a9fd-4e90-8030-853e55d8c7d9\") " pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.019324 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6v7hg" event={"ID":"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e","Type":"ContainerStarted","Data":"a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea"} Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.019789 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6v7hg" podUID="a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" containerName="registry-server" containerID="cri-o://a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea" gracePeriod=2 Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.043342 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6v7hg" podStartSLOduration=1.712868279 podStartE2EDuration="4.043317224s" podCreationTimestamp="2025-12-05 01:28:07 +0000 UTC" firstStartedPulling="2025-12-05 01:28:07.996436211 +0000 UTC m=+1186.372651572" lastFinishedPulling="2025-12-05 01:28:10.326885146 +0000 UTC m=+1188.703100517" observedRunningTime="2025-12-05 01:28:11.03965338 +0000 UTC m=+1189.415868771" watchObservedRunningTime="2025-12-05 01:28:11.043317224 +0000 UTC m=+1189.419532615" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.091985 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdx8f\" (UniqueName: \"kubernetes.io/projected/a781f496-a9fd-4e90-8030-853e55d8c7d9-kube-api-access-tdx8f\") pod \"openstack-operator-index-2rn2k\" (UID: \"a781f496-a9fd-4e90-8030-853e55d8c7d9\") " pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.125608 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdx8f\" (UniqueName: \"kubernetes.io/projected/a781f496-a9fd-4e90-8030-853e55d8c7d9-kube-api-access-tdx8f\") pod \"openstack-operator-index-2rn2k\" (UID: \"a781f496-a9fd-4e90-8030-853e55d8c7d9\") " pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.161661 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.430080 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.599390 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74vzr\" (UniqueName: \"kubernetes.io/projected/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e-kube-api-access-74vzr\") pod \"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e\" (UID: \"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e\") " Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.606324 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e-kube-api-access-74vzr" (OuterVolumeSpecName: "kube-api-access-74vzr") pod "a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" (UID: "a503ec49-3d2d-4de6-a1c8-6784b3b3a17e"). InnerVolumeSpecName "kube-api-access-74vzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.618796 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2rn2k"] Dec 05 01:28:11 crc kubenswrapper[4990]: I1205 01:28:11.702435 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74vzr\" (UniqueName: \"kubernetes.io/projected/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e-kube-api-access-74vzr\") on node \"crc\" DevicePath \"\"" Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.036019 4990 generic.go:334] "Generic (PLEG): container finished" podID="a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" containerID="a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea" exitCode=0 Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.036125 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6v7hg" Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.036156 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6v7hg" event={"ID":"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e","Type":"ContainerDied","Data":"a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea"} Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.036844 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6v7hg" event={"ID":"a503ec49-3d2d-4de6-a1c8-6784b3b3a17e","Type":"ContainerDied","Data":"becd24c2020997b4409513834f9ac2b21fa3c5122396cecdb0238917c78de4fb"} Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.036914 4990 scope.go:117] "RemoveContainer" containerID="a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea" Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.039980 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2rn2k" event={"ID":"a781f496-a9fd-4e90-8030-853e55d8c7d9","Type":"ContainerStarted","Data":"1a73e52ad907e5dfaa6dc4ebe32a0eded4bc656c54760ae5d690f3eb83c3fa6c"} Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.040165 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2rn2k" event={"ID":"a781f496-a9fd-4e90-8030-853e55d8c7d9","Type":"ContainerStarted","Data":"0bf11c679122c574f583f85330c0adb1fbbc1263b2573f441bf570beff96bae9"} Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.068634 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2rn2k" podStartSLOduration=2.007471482 
podStartE2EDuration="2.068605237s" podCreationTimestamp="2025-12-05 01:28:10 +0000 UTC" firstStartedPulling="2025-12-05 01:28:11.629652641 +0000 UTC m=+1190.005868012" lastFinishedPulling="2025-12-05 01:28:11.690786366 +0000 UTC m=+1190.067001767" observedRunningTime="2025-12-05 01:28:12.06519476 +0000 UTC m=+1190.441410161" watchObservedRunningTime="2025-12-05 01:28:12.068605237 +0000 UTC m=+1190.444820638" Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.080310 4990 scope.go:117] "RemoveContainer" containerID="a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea" Dec 05 01:28:12 crc kubenswrapper[4990]: E1205 01:28:12.081082 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea\": container with ID starting with a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea not found: ID does not exist" containerID="a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea" Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.081160 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea"} err="failed to get container status \"a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea\": rpc error: code = NotFound desc = could not find container \"a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea\": container with ID starting with a0873fd650c12d1d2b94ba987a966f0f2690d1a3788a8b64977efa79197abfea not found: ID does not exist" Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.089854 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6v7hg"] Dec 05 01:28:12 crc kubenswrapper[4990]: I1205 01:28:12.094987 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6v7hg"] Dec 05 01:28:13 crc kubenswrapper[4990]: I1205 01:28:13.943977 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" path="/var/lib/kubelet/pods/a503ec49-3d2d-4de6-a1c8-6784b3b3a17e/volumes" Dec 05 01:28:21 crc kubenswrapper[4990]: I1205 01:28:21.161869 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:21 crc kubenswrapper[4990]: I1205 01:28:21.162457 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:21 crc kubenswrapper[4990]: I1205 01:28:21.205349 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:21 crc kubenswrapper[4990]: I1205 01:28:21.823955 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:28:21 crc kubenswrapper[4990]: I1205 01:28:21.824036 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 05 01:28:22 crc kubenswrapper[4990]: I1205 01:28:22.163665 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2rn2k" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.796136 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk"] Dec 05 01:28:29 crc kubenswrapper[4990]: E1205 01:28:29.796878 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" containerName="registry-server" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.796888 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" containerName="registry-server" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.796978 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a503ec49-3d2d-4de6-a1c8-6784b3b3a17e" containerName="registry-server" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.797755 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.800000 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pw28t" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.809794 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk"] Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.986186 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-bundle\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.986538 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhb8m\" (UniqueName: \"kubernetes.io/projected/25369b8e-20b7-4826-821e-f4db1d2e533f-kube-api-access-fhb8m\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:29 crc kubenswrapper[4990]: I1205 01:28:29.986633 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-util\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.087834 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-util\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 
crc kubenswrapper[4990]: I1205 01:28:30.087940 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-bundle\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.087986 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhb8m\" (UniqueName: \"kubernetes.io/projected/25369b8e-20b7-4826-821e-f4db1d2e533f-kube-api-access-fhb8m\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.088701 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-util\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.088709 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-bundle\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.119298 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhb8m\" (UniqueName: \"kubernetes.io/projected/25369b8e-20b7-4826-821e-f4db1d2e533f-kube-api-access-fhb8m\") pod \"58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.413947 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:30 crc kubenswrapper[4990]: I1205 01:28:30.719795 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk"] Dec 05 01:28:31 crc kubenswrapper[4990]: I1205 01:28:31.195987 4990 generic.go:334] "Generic (PLEG): container finished" podID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerID="c821d6ba5cec6bf00c2f2b7e922b3770376b49daf54f29aaa5913451f5a36d6e" exitCode=0 Dec 05 01:28:31 crc kubenswrapper[4990]: I1205 01:28:31.196051 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" event={"ID":"25369b8e-20b7-4826-821e-f4db1d2e533f","Type":"ContainerDied","Data":"c821d6ba5cec6bf00c2f2b7e922b3770376b49daf54f29aaa5913451f5a36d6e"} Dec 05 01:28:31 crc kubenswrapper[4990]: I1205 01:28:31.196093 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" event={"ID":"25369b8e-20b7-4826-821e-f4db1d2e533f","Type":"ContainerStarted","Data":"e2629280749f33124a10936af53c623d6525be39aebf3397e9ca06295e170e17"} Dec 05 01:28:31 crc kubenswrapper[4990]: I1205 01:28:31.199686 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 01:28:32 crc kubenswrapper[4990]: I1205 01:28:32.207579 4990 generic.go:334] "Generic (PLEG): container finished" podID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerID="75a31a1e1a0864fa12af72c83195a1965b690fb2a8d88d5e6e23c730d6da435e" exitCode=0 Dec 05 01:28:32 crc kubenswrapper[4990]: I1205 01:28:32.207640 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" event={"ID":"25369b8e-20b7-4826-821e-f4db1d2e533f","Type":"ContainerDied","Data":"75a31a1e1a0864fa12af72c83195a1965b690fb2a8d88d5e6e23c730d6da435e"} Dec 05 01:28:33 crc kubenswrapper[4990]: I1205 01:28:33.220960 4990 generic.go:334] "Generic (PLEG): container finished" podID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerID="0be72ca37015e64b79163fa1ccb3310426156fd44cf1ec8dd14363d5841fe9ae" exitCode=0 Dec 05 01:28:33 crc kubenswrapper[4990]: I1205 01:28:33.221008 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" event={"ID":"25369b8e-20b7-4826-821e-f4db1d2e533f","Type":"ContainerDied","Data":"0be72ca37015e64b79163fa1ccb3310426156fd44cf1ec8dd14363d5841fe9ae"} Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.522983 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.656054 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-util\") pod \"25369b8e-20b7-4826-821e-f4db1d2e533f\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.656098 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-bundle\") pod \"25369b8e-20b7-4826-821e-f4db1d2e533f\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.656195 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhb8m\" (UniqueName: \"kubernetes.io/projected/25369b8e-20b7-4826-821e-f4db1d2e533f-kube-api-access-fhb8m\") pod \"25369b8e-20b7-4826-821e-f4db1d2e533f\" (UID: \"25369b8e-20b7-4826-821e-f4db1d2e533f\") " Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.656892 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-bundle" (OuterVolumeSpecName: "bundle") pod "25369b8e-20b7-4826-821e-f4db1d2e533f" (UID: "25369b8e-20b7-4826-821e-f4db1d2e533f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.663687 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25369b8e-20b7-4826-821e-f4db1d2e533f-kube-api-access-fhb8m" (OuterVolumeSpecName: "kube-api-access-fhb8m") pod "25369b8e-20b7-4826-821e-f4db1d2e533f" (UID: "25369b8e-20b7-4826-821e-f4db1d2e533f"). InnerVolumeSpecName "kube-api-access-fhb8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.669533 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-util" (OuterVolumeSpecName: "util") pod "25369b8e-20b7-4826-821e-f4db1d2e533f" (UID: "25369b8e-20b7-4826-821e-f4db1d2e533f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.757592 4990 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-util\") on node \"crc\" DevicePath \"\"" Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.757620 4990 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25369b8e-20b7-4826-821e-f4db1d2e533f-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:28:34 crc kubenswrapper[4990]: I1205 01:28:34.757631 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhb8m\" (UniqueName: \"kubernetes.io/projected/25369b8e-20b7-4826-821e-f4db1d2e533f-kube-api-access-fhb8m\") on node \"crc\" DevicePath \"\"" Dec 05 01:28:35 crc kubenswrapper[4990]: I1205 01:28:35.247048 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" event={"ID":"25369b8e-20b7-4826-821e-f4db1d2e533f","Type":"ContainerDied","Data":"e2629280749f33124a10936af53c623d6525be39aebf3397e9ca06295e170e17"} Dec 05 01:28:35 crc kubenswrapper[4990]: I1205 01:28:35.247104 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2629280749f33124a10936af53c623d6525be39aebf3397e9ca06295e170e17" Dec 05 01:28:35 crc kubenswrapper[4990]: I1205 01:28:35.247250 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.026232 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf"] Dec 05 01:28:42 crc kubenswrapper[4990]: E1205 01:28:42.027832 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="pull" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.027859 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="pull" Dec 05 01:28:42 crc kubenswrapper[4990]: E1205 01:28:42.027885 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="util" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.027896 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="util" Dec 05 01:28:42 crc kubenswrapper[4990]: E1205 01:28:42.027931 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="extract" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.027945 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="extract" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.028160 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="25369b8e-20b7-4826-821e-f4db1d2e533f" containerName="extract" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.028887 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.031785 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-sfz87" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.060568 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf"] Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.161766 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpzq\" (UniqueName: \"kubernetes.io/projected/14a886d8-c123-4618-84d2-ba3b0e29ac4b-kube-api-access-krpzq\") pod \"openstack-operator-controller-operator-6f79d9dccc-fzjsf\" (UID: \"14a886d8-c123-4618-84d2-ba3b0e29ac4b\") " pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.263120 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpzq\" (UniqueName: \"kubernetes.io/projected/14a886d8-c123-4618-84d2-ba3b0e29ac4b-kube-api-access-krpzq\") pod \"openstack-operator-controller-operator-6f79d9dccc-fzjsf\" (UID: \"14a886d8-c123-4618-84d2-ba3b0e29ac4b\") " pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.288560 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpzq\" (UniqueName: \"kubernetes.io/projected/14a886d8-c123-4618-84d2-ba3b0e29ac4b-kube-api-access-krpzq\") pod \"openstack-operator-controller-operator-6f79d9dccc-fzjsf\" (UID: \"14a886d8-c123-4618-84d2-ba3b0e29ac4b\") " pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.350425 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:28:42 crc kubenswrapper[4990]: I1205 01:28:42.807061 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf"] Dec 05 01:28:42 crc kubenswrapper[4990]: W1205 01:28:42.814358 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a886d8_c123_4618_84d2_ba3b0e29ac4b.slice/crio-3f538ecb8c08d997d79c40d72d541cd10115e5d012b735d4cab8278c0e7f762c WatchSource:0}: Error finding container 3f538ecb8c08d997d79c40d72d541cd10115e5d012b735d4cab8278c0e7f762c: Status 404 returned error can't find the container with id 3f538ecb8c08d997d79c40d72d541cd10115e5d012b735d4cab8278c0e7f762c Dec 05 01:28:43 crc kubenswrapper[4990]: I1205 01:28:43.310223 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" event={"ID":"14a886d8-c123-4618-84d2-ba3b0e29ac4b","Type":"ContainerStarted","Data":"3f538ecb8c08d997d79c40d72d541cd10115e5d012b735d4cab8278c0e7f762c"} Dec 05 01:28:47 crc kubenswrapper[4990]: I1205 01:28:47.339968 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" event={"ID":"14a886d8-c123-4618-84d2-ba3b0e29ac4b","Type":"ContainerStarted","Data":"c0bc0a19e7ccd7e33faf23b8c1dd3b2e95b7438f2311bab62eea16a9f8650a9c"} Dec 05 01:28:47 crc kubenswrapper[4990]: I1205 01:28:47.340912 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:28:47 crc kubenswrapper[4990]: I1205 01:28:47.377933 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" podStartSLOduration=1.769282831 podStartE2EDuration="5.377904314s" podCreationTimestamp="2025-12-05 01:28:42 +0000 UTC" firstStartedPulling="2025-12-05 01:28:42.815996832 +0000 UTC m=+1221.192212193" lastFinishedPulling="2025-12-05 01:28:46.424618315 +0000 UTC m=+1224.800833676" observedRunningTime="2025-12-05 01:28:47.372093669 +0000 UTC m=+1225.748309030" watchObservedRunningTime="2025-12-05 01:28:47.377904314 +0000 UTC m=+1225.754119685" Dec 05 01:28:51 crc kubenswrapper[4990]: I1205 01:28:51.824641 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:28:51 crc kubenswrapper[4990]: I1205 01:28:51.825209 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:28:52 crc kubenswrapper[4990]: I1205 01:28:52.354839 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6f79d9dccc-fzjsf" Dec 05 01:29:21 crc kubenswrapper[4990]: I1205 01:29:21.824673 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:29:21 crc kubenswrapper[4990]: I1205 01:29:21.825903 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:29:21 crc kubenswrapper[4990]: I1205 01:29:21.826054 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:29:21 crc kubenswrapper[4990]: I1205 01:29:21.827107 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff6ba92961791b172f695a12e8eb19f33bc6e8ba78d861452310d9615b6fa761"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:29:21 crc kubenswrapper[4990]: I1205 01:29:21.827216 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://ff6ba92961791b172f695a12e8eb19f33bc6e8ba78d861452310d9615b6fa761" gracePeriod=600 Dec 05 01:29:22 crc kubenswrapper[4990]: I1205 01:29:22.620012 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="ff6ba92961791b172f695a12e8eb19f33bc6e8ba78d861452310d9615b6fa761" exitCode=0 Dec 05 01:29:22 crc kubenswrapper[4990]: I1205 01:29:22.620100 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"ff6ba92961791b172f695a12e8eb19f33bc6e8ba78d861452310d9615b6fa761"} Dec 05 01:29:22 crc kubenswrapper[4990]: I1205 01:29:22.620517 4990 scope.go:117] "RemoveContainer" containerID="6f2b4a96536639cbb9d3bc8aec6f26003832337aeb02bfe5ac6cc1d82eae2a27" Dec 05 01:29:23 crc kubenswrapper[4990]: I1205 01:29:23.632342 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"5555ce4abbfedb686ddef6d7dce409f40c947a09fec383b5821b1209ff394208"} Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.083894 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.086770 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.088977 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pbs2t" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.089948 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.091188 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.093182 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-68qgw" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.100121 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.108252 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.109152 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.114258 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-p2nng" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.117903 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.127126 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.127927 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.131201 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kxmnz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.142994 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.181400 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.183856 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.185413 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvblx\" (UniqueName: \"kubernetes.io/projected/a95995a7-92e3-40c0-8fad-30e47ea759e1-kube-api-access-wvblx\") pod \"cinder-operator-controller-manager-859b6ccc6-t42g2\" (UID: \"a95995a7-92e3-40c0-8fad-30e47ea759e1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.185465 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzj57\" (UniqueName: \"kubernetes.io/projected/e908e515-9470-4a27-912f-a266a4ffe3a9-kube-api-access-gzj57\") pod \"barbican-operator-controller-manager-7d9dfd778-6fgdz\" (UID: \"e908e515-9470-4a27-912f-a266a4ffe3a9\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.185611 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.195815 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zrl8z" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.197986 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.207325 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.208188 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.209821 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-skl4p" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.212106 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.218271 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-b47qg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.219014 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.227099 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.227254 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vsx5p" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.240481 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-b47qg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.262682 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.263610 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.268815 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-j9q78" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.279902 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.281168 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.295022 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-plxvd" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.296885 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glkrj\" (UniqueName: \"kubernetes.io/projected/f54f5881-49fa-4cfa-88d9-20d0b0d9c082-kube-api-access-glkrj\") pod \"heat-operator-controller-manager-5f64f6f8bb-h8nnl\" (UID: \"f54f5881-49fa-4cfa-88d9-20d0b0d9c082\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.296924 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzj57\" (UniqueName: \"kubernetes.io/projected/e908e515-9470-4a27-912f-a266a4ffe3a9-kube-api-access-gzj57\") pod \"barbican-operator-controller-manager-7d9dfd778-6fgdz\" (UID: \"e908e515-9470-4a27-912f-a266a4ffe3a9\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.296956 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxj2c\" (UniqueName: \"kubernetes.io/projected/c19a83c4-e130-47a2-81d2-04dbea61d6c1-kube-api-access-bxj2c\") pod \"glance-operator-controller-manager-77987cd8cd-wgzhs\" (UID: \"c19a83c4-e130-47a2-81d2-04dbea61d6c1\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.296982 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnxd\" (UniqueName: \"kubernetes.io/projected/63a6f5c3-f437-478f-b72c-afcae7a4dba8-kube-api-access-mxnxd\") pod \"designate-operator-controller-manager-78b4bc895b-t29m4\" (UID: \"63a6f5c3-f437-478f-b72c-afcae7a4dba8\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.297045 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvblx\" (UniqueName: \"kubernetes.io/projected/a95995a7-92e3-40c0-8fad-30e47ea759e1-kube-api-access-wvblx\") pod \"cinder-operator-controller-manager-859b6ccc6-t42g2\" (UID: \"a95995a7-92e3-40c0-8fad-30e47ea759e1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.340546 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.369347 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.403778 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.407572 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvblx\" (UniqueName: \"kubernetes.io/projected/a95995a7-92e3-40c0-8fad-30e47ea759e1-kube-api-access-wvblx\") pod \"cinder-operator-controller-manager-859b6ccc6-t42g2\" (UID: \"a95995a7-92e3-40c0-8fad-30e47ea759e1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.407705 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408323 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhst\" (UniqueName: \"kubernetes.io/projected/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-kube-api-access-kjhst\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408397 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408435 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kz2m\" (UniqueName: \"kubernetes.io/projected/9d1a9c70-0d24-476f-a857-b06e637e24b5-kube-api-access-5kz2m\") pod \"keystone-operator-controller-manager-7765d96ddf-lr2g9\" (UID: \"9d1a9c70-0d24-476f-a857-b06e637e24b5\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408473 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxsr\" (UniqueName: \"kubernetes.io/projected/45e4a6f1-ff34-4e14-8a58-c3e88d998169-kube-api-access-mpxsr\") pod \"ironic-operator-controller-manager-6c548fd776-p8zs5\" (UID: \"45e4a6f1-ff34-4e14-8a58-c3e88d998169\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408523 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glkrj\" (UniqueName: \"kubernetes.io/projected/f54f5881-49fa-4cfa-88d9-20d0b0d9c082-kube-api-access-glkrj\") pod \"heat-operator-controller-manager-5f64f6f8bb-h8nnl\" (UID: \"f54f5881-49fa-4cfa-88d9-20d0b0d9c082\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408553 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwqx6\" (UniqueName: \"kubernetes.io/projected/53d5cea9-4a5f-4663-8511-4e830d5c86bc-kube-api-access-mwqx6\") pod \"horizon-operator-controller-manager-68c6d99b8f-fqcx2\" (UID: \"53d5cea9-4a5f-4663-8511-4e830d5c86bc\") " 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408664 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxj2c\" (UniqueName: \"kubernetes.io/projected/c19a83c4-e130-47a2-81d2-04dbea61d6c1-kube-api-access-bxj2c\") pod \"glance-operator-controller-manager-77987cd8cd-wgzhs\" (UID: \"c19a83c4-e130-47a2-81d2-04dbea61d6c1\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.408717 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnxd\" (UniqueName: \"kubernetes.io/projected/63a6f5c3-f437-478f-b72c-afcae7a4dba8-kube-api-access-mxnxd\") pod \"designate-operator-controller-manager-78b4bc895b-t29m4\" (UID: \"63a6f5c3-f437-478f-b72c-afcae7a4dba8\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.415838 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.419989 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzj57\" (UniqueName: \"kubernetes.io/projected/e908e515-9470-4a27-912f-a266a4ffe3a9-kube-api-access-gzj57\") pod \"barbican-operator-controller-manager-7d9dfd778-6fgdz\" (UID: \"e908e515-9470-4a27-912f-a266a4ffe3a9\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.424223 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.447245 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mmmbg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.473703 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.476116 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnxd\" (UniqueName: \"kubernetes.io/projected/63a6f5c3-f437-478f-b72c-afcae7a4dba8-kube-api-access-mxnxd\") pod \"designate-operator-controller-manager-78b4bc895b-t29m4\" (UID: \"63a6f5c3-f437-478f-b72c-afcae7a4dba8\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.482089 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxj2c\" (UniqueName: \"kubernetes.io/projected/c19a83c4-e130-47a2-81d2-04dbea61d6c1-kube-api-access-bxj2c\") pod \"glance-operator-controller-manager-77987cd8cd-wgzhs\" (UID: \"c19a83c4-e130-47a2-81d2-04dbea61d6c1\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.483049 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glkrj\" (UniqueName: \"kubernetes.io/projected/f54f5881-49fa-4cfa-88d9-20d0b0d9c082-kube-api-access-glkrj\") pod \"heat-operator-controller-manager-5f64f6f8bb-h8nnl\" (UID: \"f54f5881-49fa-4cfa-88d9-20d0b0d9c082\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.497074 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.498274 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.504794 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tbf8g" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.511003 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjhst\" (UniqueName: \"kubernetes.io/projected/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-kube-api-access-kjhst\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.511052 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.511087 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8pj\" (UniqueName: \"kubernetes.io/projected/55e02f15-1f53-4eb9-84fb-61a260485ebf-kube-api-access-kz8pj\") pod \"manila-operator-controller-manager-7c79b5df47-sqqtl\" (UID: \"55e02f15-1f53-4eb9-84fb-61a260485ebf\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.511107 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kz2m\" (UniqueName: \"kubernetes.io/projected/9d1a9c70-0d24-476f-a857-b06e637e24b5-kube-api-access-5kz2m\") pod \"keystone-operator-controller-manager-7765d96ddf-lr2g9\" (UID: \"9d1a9c70-0d24-476f-a857-b06e637e24b5\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.511127 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxsr\" (UniqueName: \"kubernetes.io/projected/45e4a6f1-ff34-4e14-8a58-c3e88d998169-kube-api-access-mpxsr\") pod \"ironic-operator-controller-manager-6c548fd776-p8zs5\" (UID: \"45e4a6f1-ff34-4e14-8a58-c3e88d998169\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.511154 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwqx6\" (UniqueName: \"kubernetes.io/projected/53d5cea9-4a5f-4663-8511-4e830d5c86bc-kube-api-access-mwqx6\") pod \"horizon-operator-controller-manager-68c6d99b8f-fqcx2\" (UID: \"53d5cea9-4a5f-4663-8511-4e830d5c86bc\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:29:30 crc kubenswrapper[4990]: E1205 01:29:30.511690 4990 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:30 crc kubenswrapper[4990]: E1205 01:29:30.511738 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert podName:8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9 nodeName:}" failed. 
No retries permitted until 2025-12-05 01:29:31.011723178 +0000 UTC m=+1269.387938539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert") pod "infra-operator-controller-manager-57548d458d-b47qg" (UID: "8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9") : secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.517602 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.517921 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.518757 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.522166 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5qmhr" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.543979 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjhst\" (UniqueName: \"kubernetes.io/projected/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-kube-api-access-kjhst\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.561673 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.562266 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxsr\" (UniqueName: \"kubernetes.io/projected/45e4a6f1-ff34-4e14-8a58-c3e88d998169-kube-api-access-mpxsr\") pod \"ironic-operator-controller-manager-6c548fd776-p8zs5\" (UID: \"45e4a6f1-ff34-4e14-8a58-c3e88d998169\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.578152 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwqx6\" (UniqueName: \"kubernetes.io/projected/53d5cea9-4a5f-4663-8511-4e830d5c86bc-kube-api-access-mwqx6\") pod \"horizon-operator-controller-manager-68c6d99b8f-fqcx2\" (UID: \"53d5cea9-4a5f-4663-8511-4e830d5c86bc\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.578626 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.578720 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kz2m\" (UniqueName: \"kubernetes.io/projected/9d1a9c70-0d24-476f-a857-b06e637e24b5-kube-api-access-5kz2m\") pod \"keystone-operator-controller-manager-7765d96ddf-lr2g9\" (UID: \"9d1a9c70-0d24-476f-a857-b06e637e24b5\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.595939 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.601888 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.603010 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.605505 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9g89g" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.611768 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8pj\" (UniqueName: \"kubernetes.io/projected/55e02f15-1f53-4eb9-84fb-61a260485ebf-kube-api-access-kz8pj\") pod \"manila-operator-controller-manager-7c79b5df47-sqqtl\" (UID: \"55e02f15-1f53-4eb9-84fb-61a260485ebf\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.611870 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8vg\" (UniqueName: \"kubernetes.io/projected/f6574447-fe34-4fc6-a99d-8f9898a73019-kube-api-access-qk8vg\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9mdd5\" (UID: \"f6574447-fe34-4fc6-a99d-8f9898a73019\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.627316 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.628523 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.634663 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hmqdj" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.640597 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8pj\" (UniqueName: \"kubernetes.io/projected/55e02f15-1f53-4eb9-84fb-61a260485ebf-kube-api-access-kz8pj\") pod \"manila-operator-controller-manager-7c79b5df47-sqqtl\" (UID: \"55e02f15-1f53-4eb9-84fb-61a260485ebf\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.642350 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.651962 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.661083 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.680593 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.681625 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.682343 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.682765 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.687594 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.688216 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-f588l" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.688345 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xdjh8" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.695645 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.700567 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.701662 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.714323 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bdv\" (UniqueName: \"kubernetes.io/projected/b7da1eba-30c5-45a4-819d-7aef2af480c8-kube-api-access-b5bdv\") pod \"mariadb-operator-controller-manager-56bbcc9d85-88qgd\" (UID: \"b7da1eba-30c5-45a4-819d-7aef2af480c8\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.714370 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8vg\" (UniqueName: \"kubernetes.io/projected/f6574447-fe34-4fc6-a99d-8f9898a73019-kube-api-access-qk8vg\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9mdd5\" (UID: \"f6574447-fe34-4fc6-a99d-8f9898a73019\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.714708 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd52\" (UniqueName: \"kubernetes.io/projected/a25b5669-b148-428b-a654-4a1effd836f5-kube-api-access-ntd52\") pod \"nova-operator-controller-manager-697bc559fc-mhqsp\" (UID: \"a25b5669-b148-428b-a654-4a1effd836f5\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.718068 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-pgmv9" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.734435 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.746964 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8vg\" (UniqueName: \"kubernetes.io/projected/f6574447-fe34-4fc6-a99d-8f9898a73019-kube-api-access-qk8vg\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-9mdd5\" (UID: \"f6574447-fe34-4fc6-a99d-8f9898a73019\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.756374 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.765818 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.778638 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.817922 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkh7l\" (UniqueName: \"kubernetes.io/projected/58d8a7d1-7337-4d4f-ae63-04862be6a86a-kube-api-access-dkh7l\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.817991 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd52\" (UniqueName: \"kubernetes.io/projected/a25b5669-b148-428b-a654-4a1effd836f5-kube-api-access-ntd52\") pod \"nova-operator-controller-manager-697bc559fc-mhqsp\" (UID: \"a25b5669-b148-428b-a654-4a1effd836f5\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.818020 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.818125 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4w7v\" (UniqueName: \"kubernetes.io/projected/b80a8ebf-4453-4f97-9bc3-9c3d8371b868-kube-api-access-p4w7v\") pod \"placement-operator-controller-manager-78f8948974-gc6bg\" (UID: \"b80a8ebf-4453-4f97-9bc3-9c3d8371b868\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.818510 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbb9\" (UniqueName: \"kubernetes.io/projected/4ac7ce06-d864-4577-a628-201945f57f8a-kube-api-access-lxbb9\") pod \"octavia-operator-controller-manager-998648c74-k7zzp\" (UID: \"4ac7ce06-d864-4577-a628-201945f57f8a\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.818568 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56md\" (UniqueName: \"kubernetes.io/projected/438c87d1-af5c-42ee-988c-82d88ebd6439-kube-api-access-v56md\") pod \"ovn-operator-controller-manager-b6456fdb6-wk9np\" (UID: \"438c87d1-af5c-42ee-988c-82d88ebd6439\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.818646 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bdv\" (UniqueName: \"kubernetes.io/projected/b7da1eba-30c5-45a4-819d-7aef2af480c8-kube-api-access-b5bdv\") pod \"mariadb-operator-controller-manager-56bbcc9d85-88qgd\" (UID: \"b7da1eba-30c5-45a4-819d-7aef2af480c8\") " 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.833985 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.838938 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.840195 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd52\" (UniqueName: \"kubernetes.io/projected/a25b5669-b148-428b-a654-4a1effd836f5-kube-api-access-ntd52\") pod \"nova-operator-controller-manager-697bc559fc-mhqsp\" (UID: \"a25b5669-b148-428b-a654-4a1effd836f5\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.841652 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.843975 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4qd8w" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.844548 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bdv\" (UniqueName: \"kubernetes.io/projected/b7da1eba-30c5-45a4-819d-7aef2af480c8-kube-api-access-b5bdv\") pod \"mariadb-operator-controller-manager-56bbcc9d85-88qgd\" (UID: \"b7da1eba-30c5-45a4-819d-7aef2af480c8\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.856037 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.858975 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.861379 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rtmg2" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.881458 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.896289 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.897704 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.911553 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.920337 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.922670 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4w7v\" (UniqueName: \"kubernetes.io/projected/b80a8ebf-4453-4f97-9bc3-9c3d8371b868-kube-api-access-p4w7v\") pod \"placement-operator-controller-manager-78f8948974-gc6bg\" (UID: \"b80a8ebf-4453-4f97-9bc3-9c3d8371b868\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.922786 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbb9\" (UniqueName: \"kubernetes.io/projected/4ac7ce06-d864-4577-a628-201945f57f8a-kube-api-access-lxbb9\") pod \"octavia-operator-controller-manager-998648c74-k7zzp\" (UID: \"4ac7ce06-d864-4577-a628-201945f57f8a\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.922878 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56md\" (UniqueName: \"kubernetes.io/projected/438c87d1-af5c-42ee-988c-82d88ebd6439-kube-api-access-v56md\") pod \"ovn-operator-controller-manager-b6456fdb6-wk9np\" (UID: \"438c87d1-af5c-42ee-988c-82d88ebd6439\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.923070 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkh7l\" (UniqueName: \"kubernetes.io/projected/58d8a7d1-7337-4d4f-ae63-04862be6a86a-kube-api-access-dkh7l\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:30 crc kubenswrapper[4990]: E1205 01:29:30.922734 4990 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 01:29:30 crc kubenswrapper[4990]: E1205 01:29:30.923312 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert podName:58d8a7d1-7337-4d4f-ae63-04862be6a86a nodeName:}" failed. No retries permitted until 2025-12-05 01:29:31.423292956 +0000 UTC m=+1269.799508317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" (UID: "58d8a7d1-7337-4d4f-ae63-04862be6a86a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.923918 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.928911 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.932476 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.942012 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8ptpz" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.943556 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.944326 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.946515 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbb9\" (UniqueName: \"kubernetes.io/projected/4ac7ce06-d864-4577-a628-201945f57f8a-kube-api-access-lxbb9\") pod \"octavia-operator-controller-manager-998648c74-k7zzp\" (UID: \"4ac7ce06-d864-4577-a628-201945f57f8a\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.947027 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v56md\" (UniqueName: \"kubernetes.io/projected/438c87d1-af5c-42ee-988c-82d88ebd6439-kube-api-access-v56md\") pod \"ovn-operator-controller-manager-b6456fdb6-wk9np\" (UID: \"438c87d1-af5c-42ee-988c-82d88ebd6439\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.947322 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkh7l\" (UniqueName: \"kubernetes.io/projected/58d8a7d1-7337-4d4f-ae63-04862be6a86a-kube-api-access-dkh7l\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.950874 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.965588 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"] Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.968198 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4w7v\" (UniqueName: \"kubernetes.io/projected/b80a8ebf-4453-4f97-9bc3-9c3d8371b868-kube-api-access-p4w7v\") pod \"placement-operator-controller-manager-78f8948974-gc6bg\" (UID: \"b80a8ebf-4453-4f97-9bc3-9c3d8371b868\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.969307 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.971065 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hwpwl" Dec 05 01:29:30 crc kubenswrapper[4990]: I1205 01:29:30.993433 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"] Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.030666 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.032249 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvb6\" (UniqueName: \"kubernetes.io/projected/683b019b-d147-4c85-b537-e4000a14dfed-kube-api-access-5fvb6\") pod \"swift-operator-controller-manager-5f8c65bbfc-4fl28\" (UID: \"683b019b-d147-4c85-b537-e4000a14dfed\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.032297 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbm8\" (UniqueName: \"kubernetes.io/projected/4879650d-849b-496e-b8de-92dde4a62982-kube-api-access-spbm8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-jfnfg\" (UID: \"4879650d-849b-496e-b8de-92dde4a62982\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.032314 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29pgw\" (UniqueName: \"kubernetes.io/projected/bd1e0999-c2d5-4712-b995-18e7778231cf-kube-api-access-29pgw\") pod \"test-operator-controller-manager-5854674fcc-lfhcj\" (UID: \"bd1e0999-c2d5-4712-b995-18e7778231cf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.030844 4990 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.032800 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert podName:8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:32.032428863 +0000 UTC m=+1270.408644224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert") pod "infra-operator-controller-manager-57548d458d-b47qg" (UID: "8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9") : secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.039263 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.060847 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"] Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.061716 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.067410 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.067460 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jb2pn" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.067669 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.084320 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"] Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.096846 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.122634 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"] Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.123508 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.125735 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vgmxx"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.130457 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.133415 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6gw\" (UniqueName: \"kubernetes.io/projected/4aeaf936-39c4-4558-bba8-c47839e79431-kube-api-access-sn6gw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6g29k\" (UID: \"4aeaf936-39c4-4558-bba8-c47839e79431\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.133476 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvb6\" (UniqueName: \"kubernetes.io/projected/683b019b-d147-4c85-b537-e4000a14dfed-kube-api-access-5fvb6\") pod \"swift-operator-controller-manager-5f8c65bbfc-4fl28\" (UID: \"683b019b-d147-4c85-b537-e4000a14dfed\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.133527 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbm8\" (UniqueName: \"kubernetes.io/projected/4879650d-849b-496e-b8de-92dde4a62982-kube-api-access-spbm8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-jfnfg\" (UID: \"4879650d-849b-496e-b8de-92dde4a62982\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.133544 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29pgw\" (UniqueName: \"kubernetes.io/projected/bd1e0999-c2d5-4712-b995-18e7778231cf-kube-api-access-29pgw\") pod \"test-operator-controller-manager-5854674fcc-lfhcj\" (UID: \"bd1e0999-c2d5-4712-b995-18e7778231cf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.134269 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.134297 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z6m5\" (UniqueName: \"kubernetes.io/projected/dec00109-2be0-4153-86df-7ad985b1f396-kube-api-access-7z6m5\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.134332 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.134353 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2vx\" (UniqueName: \"kubernetes.io/projected/307385f4-34c6-473a-a3d6-c0be9a334b68-kube-api-access-4c2vx\") pod \"watcher-operator-controller-manager-769dc69bc-mv9jg\" (UID: \"307385f4-34c6-473a-a3d6-c0be9a334b68\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.153686 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29pgw\" (UniqueName: \"kubernetes.io/projected/bd1e0999-c2d5-4712-b995-18e7778231cf-kube-api-access-29pgw\") pod \"test-operator-controller-manager-5854674fcc-lfhcj\" (UID: \"bd1e0999-c2d5-4712-b995-18e7778231cf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.157621 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvb6\" (UniqueName: \"kubernetes.io/projected/683b019b-d147-4c85-b537-e4000a14dfed-kube-api-access-5fvb6\") pod \"swift-operator-controller-manager-5f8c65bbfc-4fl28\" (UID: \"683b019b-d147-4c85-b537-e4000a14dfed\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.158082 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbm8\" (UniqueName: \"kubernetes.io/projected/4879650d-849b-496e-b8de-92dde4a62982-kube-api-access-spbm8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-jfnfg\" (UID: \"4879650d-849b-496e-b8de-92dde4a62982\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.163574 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.213890 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.223998 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.238699 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.238763 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2vx\" (UniqueName: \"kubernetes.io/projected/307385f4-34c6-473a-a3d6-c0be9a334b68-kube-api-access-4c2vx\") pod \"watcher-operator-controller-manager-769dc69bc-mv9jg\" (UID: \"307385f4-34c6-473a-a3d6-c0be9a334b68\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.238845 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6gw\" (UniqueName: \"kubernetes.io/projected/4aeaf936-39c4-4558-bba8-c47839e79431-kube-api-access-sn6gw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6g29k\" (UID: \"4aeaf936-39c4-4558-bba8-c47839e79431\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.238918 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.238942 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z6m5\" (UniqueName: \"kubernetes.io/projected/dec00109-2be0-4153-86df-7ad985b1f396-kube-api-access-7z6m5\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.239294 4990 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.239336 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:31.739323293 +0000 UTC m=+1270.115538654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "metrics-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.239808 4990 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.239841 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:31.739832358 +0000 UTC m=+1270.116047719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "webhook-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.241851 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.265974 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2vx\" (UniqueName: \"kubernetes.io/projected/307385f4-34c6-473a-a3d6-c0be9a334b68-kube-api-access-4c2vx\") pod \"watcher-operator-controller-manager-769dc69bc-mv9jg\" (UID: \"307385f4-34c6-473a-a3d6-c0be9a334b68\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.266411 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6gw\" (UniqueName: \"kubernetes.io/projected/4aeaf936-39c4-4558-bba8-c47839e79431-kube-api-access-sn6gw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6g29k\" (UID: \"4aeaf936-39c4-4558-bba8-c47839e79431\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.268256 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z6m5\" (UniqueName: \"kubernetes.io/projected/dec00109-2be0-4153-86df-7ad985b1f396-kube-api-access-7z6m5\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.300877 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.306581 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.357958 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.375640 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz"]
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.381304 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode908e515_9470_4a27_912f_a266a4ffe3a9.slice/crio-2cef19eb3e8201f36222840b35f5f74ce166ed77af11de7275f749c3fe09cc28 WatchSource:0}: Error finding container 2cef19eb3e8201f36222840b35f5f74ce166ed77af11de7275f749c3fe09cc28: Status 404 returned error can't find the container with id 2cef19eb3e8201f36222840b35f5f74ce166ed77af11de7275f749c3fe09cc28
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.444346 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.444553 4990 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.444605 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert podName:58d8a7d1-7337-4d4f-ae63-04862be6a86a nodeName:}" failed. No retries permitted until 2025-12-05 01:29:32.444589888 +0000 UTC m=+1270.820805249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" (UID: "58d8a7d1-7337-4d4f-ae63-04862be6a86a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.450852 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.504208 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.576640 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.586545 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.590838 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.601524 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4"]
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.627747 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6574447_fe34_4fc6_a99d_8f9898a73019.slice/crio-28e38761ab61027f59901544f1f42f002dfd566cbb037dc312dd9e36fbf8e091 WatchSource:0}: Error finding container 28e38761ab61027f59901544f1f42f002dfd566cbb037dc312dd9e36fbf8e091: Status 404 returned error can't find the container with id 28e38761ab61027f59901544f1f42f002dfd566cbb037dc312dd9e36fbf8e091
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.692935 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" event={"ID":"c19a83c4-e130-47a2-81d2-04dbea61d6c1","Type":"ContainerStarted","Data":"415f0de8e6b0470853236b9c2c8e6088aa99242fa28cdf27cfce43657bb8a4c3"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.693859 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" event={"ID":"9d1a9c70-0d24-476f-a857-b06e637e24b5","Type":"ContainerStarted","Data":"c342f4434c66336e28f14a3a86f8557ee8e2cfe95f983d2b283ac06104b274c6"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.695087 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" event={"ID":"e908e515-9470-4a27-912f-a266a4ffe3a9","Type":"ContainerStarted","Data":"2cef19eb3e8201f36222840b35f5f74ce166ed77af11de7275f749c3fe09cc28"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.696561 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" event={"ID":"63a6f5c3-f437-478f-b72c-afcae7a4dba8","Type":"ContainerStarted","Data":"51c407d02a0f7a9069dad7c571f3585bd90ad8c0c1cf3afc971b917c4199500e"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.701611 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" event={"ID":"f6574447-fe34-4fc6-a99d-8f9898a73019","Type":"ContainerStarted","Data":"28e38761ab61027f59901544f1f42f002dfd566cbb037dc312dd9e36fbf8e091"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.702535 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" event={"ID":"45e4a6f1-ff34-4e14-8a58-c3e88d998169","Type":"ContainerStarted","Data":"e72c0ffa8f2d3b924a02aecd369bb270ac791fffb3366a661a4ebed70f812ad2"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.703725 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" event={"ID":"f54f5881-49fa-4cfa-88d9-20d0b0d9c082","Type":"ContainerStarted","Data":"09aac5b5eabdd7166a683978558781b4dccf103b2afed184d9edd021554aa58c"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.704760 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" event={"ID":"4ac7ce06-d864-4577-a628-201945f57f8a","Type":"ContainerStarted","Data":"e88eb10dd05d8342bc26f4578264ae1d80f50671eb81da6040416dc51f36ca45"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.705620 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" event={"ID":"a95995a7-92e3-40c0-8fad-30e47ea759e1","Type":"ContainerStarted","Data":"fa9b366bcdedaccf88fdcb78b5ff3208f038fb5748170d80ab0fc20247f83350"}
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.750120 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.750172 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.750368 4990 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.750429 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:32.750407504 +0000 UTC m=+1271.126622865 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "metrics-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.750654 4990 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.750759 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:32.750733223 +0000 UTC m=+1271.126948584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "webhook-server-cert" not found
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.769333 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53d5cea9_4a5f_4663_8511_4e830d5c86bc.slice/crio-298e128625add6c10d1f0b264dcd5d9ec6705054b5722b2d655026dd932c9d0e WatchSource:0}: Error finding container 298e128625add6c10d1f0b264dcd5d9ec6705054b5722b2d655026dd932c9d0e: Status 404 returned error can't find the container with id 298e128625add6c10d1f0b264dcd5d9ec6705054b5722b2d655026dd932c9d0e
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.776409 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.788175 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.794912 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl"]
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.800318 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55e02f15_1f53_4eb9_84fb_61a260485ebf.slice/crio-b1545e194279c5db37abb499d4e99a68a6aacf25b7d1c9c6dc2777cd08f24c11 WatchSource:0}: Error finding container b1545e194279c5db37abb499d4e99a68a6aacf25b7d1c9c6dc2777cd08f24c11: Status 404 returned error can't find the container with id b1545e194279c5db37abb499d4e99a68a6aacf25b7d1c9c6dc2777cd08f24c11
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.801136 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7da1eba_30c5_45a4_819d_7aef2af480c8.slice/crio-93c86750825f675e80a88fb1b4bc8ea2b73b909b664cfc18ae7ae8e40046c7df WatchSource:0}: Error finding container 93c86750825f675e80a88fb1b4bc8ea2b73b909b664cfc18ae7ae8e40046c7df: Status 404 returned error can't find the container with id 93c86750825f675e80a88fb1b4bc8ea2b73b909b664cfc18ae7ae8e40046c7df
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.802296 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd"]
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.803709 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kz8pj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-sqqtl_openstack-operators(55e02f15-1f53-4eb9-84fb-61a260485ebf): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.807291 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kz8pj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-sqqtl_openstack-operators(55e02f15-1f53-4eb9-84fb-61a260485ebf): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.808600 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" podUID="55e02f15-1f53-4eb9-84fb-61a260485ebf"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.812406 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np"]
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.813389 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5bdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-88qgd_openstack-operators(b7da1eba-30c5-45a4-819d-7aef2af480c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.817178 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5bdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-88qgd_openstack-operators(b7da1eba-30c5-45a4-819d-7aef2af480c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.820318 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" podUID="b7da1eba-30c5-45a4-819d-7aef2af480c8"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.876568 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj"]
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.885382 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg"]
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.887724 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1e0999_c2d5_4712_b995_18e7778231cf.slice/crio-d52cf1f09cd09c9c168872b24958609feca408c1328e8a227a6f3f06ec3fd607 WatchSource:0}: Error finding container d52cf1f09cd09c9c168872b24958609feca408c1328e8a227a6f3f06ec3fd607: Status 404 returned error can't find the container with id d52cf1f09cd09c9c168872b24958609feca408c1328e8a227a6f3f06ec3fd607
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.889818 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod683b019b_d147_4c85_b537_e4000a14dfed.slice/crio-f3083efe3bd5f529621d5416bc50b8f6c420ec54abc73c5993664de2a3aa7b42 WatchSource:0}: Error finding container f3083efe3bd5f529621d5416bc50b8f6c420ec54abc73c5993664de2a3aa7b42: Status 404 returned error can't find the container with id f3083efe3bd5f529621d5416bc50b8f6c420ec54abc73c5993664de2a3aa7b42
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.891771 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28"]
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.892307 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fvb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-4fl28_openstack-operators(683b019b-d147-4c85-b537-e4000a14dfed): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.892710 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb80a8ebf_4453_4f97_9bc3_9c3d8371b868.slice/crio-7370f5ff5c7fb4728799db615e858f6adb769ae834f60c76ba1a9d8ffc03904a WatchSource:0}: Error finding container 7370f5ff5c7fb4728799db615e858f6adb769ae834f60c76ba1a9d8ffc03904a: Status 404 returned error can't find the container with id 7370f5ff5c7fb4728799db615e858f6adb769ae834f60c76ba1a9d8ffc03904a
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.894242 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fvb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-4fl28_openstack-operators(683b019b-d147-4c85-b537-e4000a14dfed): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: W1205 01:29:31.894689 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4879650d_849b_496e_b8de_92dde4a62982.slice/crio-82a71b919edb9a320510ab89f99574edb31b6f47284c7ab0dc801c199958634a WatchSource:0}: Error finding container 82a71b919edb9a320510ab89f99574edb31b6f47284c7ab0dc801c199958634a: Status 404 returned error can't find the container with id 82a71b919edb9a320510ab89f99574edb31b6f47284c7ab0dc801c199958634a
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.894717 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4w7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-gc6bg_openstack-operators(b80a8ebf-4453-4f97-9bc3-9c3d8371b868): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.895445 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" podUID="683b019b-d147-4c85-b537-e4000a14dfed"
Dec 05 01:29:31 crc kubenswrapper[4990]: I1205 01:29:31.899448 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg"]
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.900185 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4w7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-gc6bg_openstack-operators(b80a8ebf-4453-4f97-9bc3-9c3d8371b868): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.901350 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" podUID="b80a8ebf-4453-4f97-9bc3-9c3d8371b868"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.901405 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-spbm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-jfnfg_openstack-operators(4879650d-849b-496e-b8de-92dde4a62982): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.903355 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-spbm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-jfnfg_openstack-operators(4879650d-849b-496e-b8de-92dde4a62982): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:31 crc kubenswrapper[4990]: E1205 01:29:31.905263 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" podUID="4879650d-849b-496e-b8de-92dde4a62982"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.006901 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg"]
Dec 05 01:29:32 crc kubenswrapper[4990]: W1205 01:29:32.012805 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod307385f4_34c6_473a_a3d6_c0be9a334b68.slice/crio-111ce737249538abf8819806a588ad01b00d2f0b13b2a74ea7fb8f37ea5ccea1 WatchSource:0}: Error finding container 111ce737249538abf8819806a588ad01b00d2f0b13b2a74ea7fb8f37ea5ccea1: Status 404 returned error can't find the container with id 111ce737249538abf8819806a588ad01b00d2f0b13b2a74ea7fb8f37ea5ccea1
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.015607 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4c2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-mv9jg_openstack-operators(307385f4-34c6-473a-a3d6-c0be9a334b68): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.017908 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4c2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-mv9jg_openstack-operators(307385f4-34c6-473a-a3d6-c0be9a334b68): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.019030 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" podUID="307385f4-34c6-473a-a3d6-c0be9a334b68"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.023652 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k"]
Dec 05 01:29:32 crc kubenswrapper[4990]: W1205 01:29:32.031054 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aeaf936_39c4_4558_bba8_c47839e79431.slice/crio-e1dc6fe1d0802caca0aba8f008a8cbbd064b35d533af2fe7b8a4609e81c7fdc8 WatchSource:0}: Error finding container e1dc6fe1d0802caca0aba8f008a8cbbd064b35d533af2fe7b8a4609e81c7fdc8: Status 404 returned error can't find the container with id e1dc6fe1d0802caca0aba8f008a8cbbd064b35d533af2fe7b8a4609e81c7fdc8
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.034346 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sn6gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6g29k_openstack-operators(4aeaf936-39c4-4558-bba8-c47839e79431): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.035540 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k" podUID="4aeaf936-39c4-4558-bba8-c47839e79431"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.056774 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.056973 4990 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.057072 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert podName:8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:34.057045615 +0000 UTC m=+1272.433261046 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert") pod "infra-operator-controller-manager-57548d458d-b47qg" (UID: "8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9") : secret "infra-operator-webhook-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.467162 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.468019 4990 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.468088 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert podName:58d8a7d1-7337-4d4f-ae63-04862be6a86a nodeName:}" failed. No retries permitted until 2025-12-05 01:29:34.468059567 +0000 UTC m=+1272.844274918 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" (UID: "58d8a7d1-7337-4d4f-ae63-04862be6a86a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.714738 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" event={"ID":"a25b5669-b148-428b-a654-4a1effd836f5","Type":"ContainerStarted","Data":"48767bb57316544f86861a41de4ebf5f989961fbd90f759a005a42ea5a67b132"}
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.716340 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" event={"ID":"b80a8ebf-4453-4f97-9bc3-9c3d8371b868","Type":"ContainerStarted","Data":"7370f5ff5c7fb4728799db615e858f6adb769ae834f60c76ba1a9d8ffc03904a"}
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.731236 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" event={"ID":"4879650d-849b-496e-b8de-92dde4a62982","Type":"ContainerStarted","Data":"82a71b919edb9a320510ab89f99574edb31b6f47284c7ab0dc801c199958634a"}
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.732048 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" podUID="b80a8ebf-4453-4f97-9bc3-9c3d8371b868"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.734998 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k" event={"ID":"4aeaf936-39c4-4558-bba8-c47839e79431","Type":"ContainerStarted","Data":"e1dc6fe1d0802caca0aba8f008a8cbbd064b35d533af2fe7b8a4609e81c7fdc8"}
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.735473 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" podUID="4879650d-849b-496e-b8de-92dde4a62982"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.736765 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k" podUID="4aeaf936-39c4-4558-bba8-c47839e79431"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.750776 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" event={"ID":"bd1e0999-c2d5-4712-b995-18e7778231cf","Type":"ContainerStarted","Data":"d52cf1f09cd09c9c168872b24958609feca408c1328e8a227a6f3f06ec3fd607"}
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.758581 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" event={"ID":"53d5cea9-4a5f-4663-8511-4e830d5c86bc","Type":"ContainerStarted","Data":"298e128625add6c10d1f0b264dcd5d9ec6705054b5722b2d655026dd932c9d0e"}
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.773685 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" event={"ID":"438c87d1-af5c-42ee-988c-82d88ebd6439","Type":"ContainerStarted","Data":"04b88eb14403a1618632866c78d5e1f4af4608a8667d374c3311133c7d4131b9"}
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.773868 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.773916 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.774066 4990 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.774115 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:34.774098991 +0000 UTC m=+1273.150314352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "metrics-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.774161 4990 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.774191 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:34.774182453 +0000 UTC m=+1273.150397814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "webhook-server-cert" not found
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.781324 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" event={"ID":"b7da1eba-30c5-45a4-819d-7aef2af480c8","Type":"ContainerStarted","Data":"93c86750825f675e80a88fb1b4bc8ea2b73b909b664cfc18ae7ae8e40046c7df"}
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.785137 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" event={"ID":"307385f4-34c6-473a-a3d6-c0be9a334b68","Type":"ContainerStarted","Data":"111ce737249538abf8819806a588ad01b00d2f0b13b2a74ea7fb8f37ea5ccea1"}
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.785358 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" podUID="b7da1eba-30c5-45a4-819d-7aef2af480c8"
Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.791849 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" podUID="307385f4-34c6-473a-a3d6-c0be9a334b68"
Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.792158 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" event={"ID":"55e02f15-1f53-4eb9-84fb-61a260485ebf","Type":"ContainerStarted","Data":"b1545e194279c5db37abb499d4e99a68a6aacf25b7d1c9c6dc2777cd08f24c11"} Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.794625 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" podUID="55e02f15-1f53-4eb9-84fb-61a260485ebf" Dec 05 01:29:32 crc kubenswrapper[4990]: I1205 01:29:32.795811 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" event={"ID":"683b019b-d147-4c85-b537-e4000a14dfed","Type":"ContainerStarted","Data":"f3083efe3bd5f529621d5416bc50b8f6c420ec54abc73c5993664de2a3aa7b42"} Dec 05 01:29:32 crc kubenswrapper[4990]: E1205 01:29:32.799326 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" podUID="683b019b-d147-4c85-b537-e4000a14dfed" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.811017 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k" podUID="4aeaf936-39c4-4558-bba8-c47839e79431" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.813122 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" podUID="b7da1eba-30c5-45a4-819d-7aef2af480c8" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.813319 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" podUID="55e02f15-1f53-4eb9-84fb-61a260485ebf" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.813468 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" podUID="b80a8ebf-4453-4f97-9bc3-9c3d8371b868" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.813548 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" podUID="4879650d-849b-496e-b8de-92dde4a62982" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.813702 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" podUID="307385f4-34c6-473a-a3d6-c0be9a334b68" Dec 05 01:29:33 crc kubenswrapper[4990]: E1205 01:29:33.831963 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" podUID="683b019b-d147-4c85-b537-e4000a14dfed" Dec 05 01:29:34 crc kubenswrapper[4990]: I1205 01:29:34.092361 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.092581 4990 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.092629 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert 
podName:8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:38.092613643 +0000 UTC m=+1276.468829004 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert") pod "infra-operator-controller-manager-57548d458d-b47qg" (UID: "8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9") : secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: I1205 01:29:34.497828 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.498227 4990 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.498318 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert podName:58d8a7d1-7337-4d4f-ae63-04862be6a86a nodeName:}" failed. No retries permitted until 2025-12-05 01:29:38.498303575 +0000 UTC m=+1276.874518936 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" (UID: "58d8a7d1-7337-4d4f-ae63-04862be6a86a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: I1205 01:29:34.803851 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.804056 4990 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.804152 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:38.804127802 +0000 UTC m=+1277.180343253 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "webhook-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: I1205 01:29:34.804558 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.804725 4990 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 01:29:34 crc kubenswrapper[4990]: E1205 01:29:34.804793 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:38.804775071 +0000 UTC m=+1277.180990512 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "metrics-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: I1205 01:29:38.156408 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.156613 4990 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.156979 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert podName:8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:46.156957026 +0000 UTC m=+1284.533172397 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert") pod "infra-operator-controller-manager-57548d458d-b47qg" (UID: "8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9") : secret "infra-operator-webhook-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: I1205 01:29:38.562679 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.562841 4990 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.562897 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert podName:58d8a7d1-7337-4d4f-ae63-04862be6a86a nodeName:}" failed. No retries permitted until 2025-12-05 01:29:46.562879054 +0000 UTC m=+1284.939094425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" (UID: "58d8a7d1-7337-4d4f-ae63-04862be6a86a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: I1205 01:29:38.866761 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:38 crc kubenswrapper[4990]: I1205 01:29:38.866843 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.866977 4990 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.867054 4990 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.867059 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:46.867037964 +0000 UTC m=+1285.243253335 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "webhook-server-cert" not found Dec 05 01:29:38 crc kubenswrapper[4990]: E1205 01:29:38.867146 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs podName:dec00109-2be0-4153-86df-7ad985b1f396 nodeName:}" failed. No retries permitted until 2025-12-05 01:29:46.867129246 +0000 UTC m=+1285.243344607 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs") pod "openstack-operator-controller-manager-79966545b7-krksl" (UID: "dec00109-2be0-4153-86df-7ad985b1f396") : secret "metrics-server-cert" not found Dec 05 01:29:44 crc kubenswrapper[4990]: E1205 01:29:44.721405 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 05 01:29:44 crc kubenswrapper[4990]: E1205 01:29:44.722461 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5kz2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-lr2g9_openstack-operators(9d1a9c70-0d24-476f-a857-b06e637e24b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.184644 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mxnxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-t29m4_openstack-operators(63a6f5c3-f437-478f-b72c-afcae7a4dba8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.186584 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" podUID="63a6f5c3-f437-478f-b72c-afcae7a4dba8" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.199944 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glkrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-h8nnl_openstack-operators(f54f5881-49fa-4cfa-88d9-20d0b0d9c082): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.201127 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" podUID="f54f5881-49fa-4cfa-88d9-20d0b0d9c082" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.216537 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntd52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-mhqsp_openstack-operators(a25b5669-b148-428b-a654-4a1effd836f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.220668 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" podUID="a25b5669-b148-428b-a654-4a1effd836f5"
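
The recurring "pull QPS exceeded" failures in this stretch are the kubelet's own image-pull rate limiter, configured by the KubeletConfiguration fields registryPullQPS and registryBurst (defaults 5 pulls/s and a burst of 10). A cold CRC node starting dozens of operator images at once drains the bucket, the surplus pulls fail immediately with ErrImagePull, and the pods sit in ImagePullBackOff until a retry finds a token, which is exactly what the later ContainerStarted events show. A rough token-bucket model of that behavior, as an illustration rather than kubelet's implementation:

```go
// Token-bucket sketch of kubelet's pull throttling. The qps/burst values
// match the documented kubelet defaults; everything else is illustrative.
package main

import (
	"fmt"
	"time"
)

type tokenBucket struct {
	tokens, burst, qps float64
	last               time.Time
}

// tryAccept refills at qps tokens/second and spends one token per pull;
// a pull that finds no token fails fast, like the ErrImagePull lines above.
func (b *tokenBucket) tryAccept(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	b.last = now
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

func main() {
	b := &tokenBucket{tokens: 10, burst: 10, qps: 5, last: time.Now()}
	for i := 1; i <= 30; i++ { // 30 operator images requested at once
		if !b.tryAccept(time.Now()) {
			fmt.Printf("pull %d: pull QPS exceeded\n", i)
			continue
		}
		fmt.Printf("pull %d: allowed\n", i)
	}
}
```

Note that the keystone-operator failure at 01:29:44 is different: that pull was canceled mid-copy ("rpc error: code = Canceled desc = copying config: context canceled"), not throttled.
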
skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" podUID="a25b5669-b148-428b-a654-4a1effd836f5" Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.923225 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" event={"ID":"f54f5881-49fa-4cfa-88d9-20d0b0d9c082","Type":"ContainerStarted","Data":"821907a847412b2f601f8de40a46e709b7972635e7ce8da4dc821c9e62cb70b3"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.923938 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.928842 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" podUID="f54f5881-49fa-4cfa-88d9-20d0b0d9c082" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.934659 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" podUID="a25b5669-b148-428b-a654-4a1effd836f5" Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.940942 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.940976 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" event={"ID":"a25b5669-b148-428b-a654-4a1effd836f5","Type":"ContainerStarted","Data":"775572d6c88f7c95639931a23226f65069c6961c98c9c96e87b68557f549a825"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.940991 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" event={"ID":"e908e515-9470-4a27-912f-a266a4ffe3a9","Type":"ContainerStarted","Data":"60577810e9bb801f4f284608574603c2c33129d3f68a1cf3b9649e524cd66ee4"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.942309 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" event={"ID":"f6574447-fe34-4fc6-a99d-8f9898a73019","Type":"ContainerStarted","Data":"cb5b35f3b84a7549fce3a82eac29f09e96c3aca1855b047ac247ebcb8379ffd8"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.946921 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" event={"ID":"45e4a6f1-ff34-4e14-8a58-c3e88d998169","Type":"ContainerStarted","Data":"ccffe839cb4658ac59906d8b80de5aaf5fb3bdf9630684dec97033a41686dc70"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.948541 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" 
event={"ID":"4ac7ce06-d864-4577-a628-201945f57f8a","Type":"ContainerStarted","Data":"9bdebd2d2d2adb4740307678a87e53acd950701257519648061e3a5f27a2a140"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.958707 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" event={"ID":"a95995a7-92e3-40c0-8fad-30e47ea759e1","Type":"ContainerStarted","Data":"4de44a8ee6ade2eee5c9734249f4a508d7442794a5205db43b40ec0aba2ec174"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.964093 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" event={"ID":"bd1e0999-c2d5-4712-b995-18e7778231cf","Type":"ContainerStarted","Data":"f720a21c39e1c8fb7c813adec7f939a45f319431627d7ab6294003c76317dcfc"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.966897 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" event={"ID":"63a6f5c3-f437-478f-b72c-afcae7a4dba8","Type":"ContainerStarted","Data":"2fdada85818416331c91db6cac7731927d037e017722c46bd738764116aa9896"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.968275 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:45 crc kubenswrapper[4990]: E1205 01:29:45.968522 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" podUID="63a6f5c3-f437-478f-b72c-afcae7a4dba8" Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.970104 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" event={"ID":"53d5cea9-4a5f-4663-8511-4e830d5c86bc","Type":"ContainerStarted","Data":"533a4280b0079ea0d053676e2fdfb5ab8cc6dd62678264c9a61d40b5ba5fd0d0"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.972635 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" event={"ID":"438c87d1-af5c-42ee-988c-82d88ebd6439","Type":"ContainerStarted","Data":"6bc711e0fac642ca02140788fe4615c6908c6e57fabcebaa46ea522d55e4cbcb"} Dec 05 01:29:45 crc kubenswrapper[4990]: I1205 01:29:45.974371 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" event={"ID":"c19a83c4-e130-47a2-81d2-04dbea61d6c1","Type":"ContainerStarted","Data":"b1547bc0d87bd4f68cb789203df6e18b8c269e7a1e957e2c7fb36ccd31de83d5"} Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.203276 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod \"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.215917 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9-cert\") pod 
\"infra-operator-controller-manager-57548d458d-b47qg\" (UID: \"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.442135 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.606970 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.632077 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58d8a7d1-7337-4d4f-ae63-04862be6a86a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt\" (UID: \"58d8a7d1-7337-4d4f-ae63-04862be6a86a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.675060 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.910615 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.910673 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.915156 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-metrics-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.924116 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dec00109-2be0-4153-86df-7ad985b1f396-webhook-certs\") pod \"openstack-operator-controller-manager-79966545b7-krksl\" (UID: \"dec00109-2be0-4153-86df-7ad985b1f396\") " pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:46 crc kubenswrapper[4990]: E1205 01:29:46.983297 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" podUID="f54f5881-49fa-4cfa-88d9-20d0b0d9c082" Dec 05 01:29:46 crc kubenswrapper[4990]: E1205 01:29:46.983673 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" podUID="a25b5669-b148-428b-a654-4a1effd836f5" Dec 05 01:29:46 crc kubenswrapper[4990]: E1205 01:29:46.984410 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" podUID="63a6f5c3-f437-478f-b72c-afcae7a4dba8" Dec 05 01:29:46 crc kubenswrapper[4990]: I1205 01:29:46.997423 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:29:47 crc kubenswrapper[4990]: I1205 01:29:47.560286 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt"] Dec 05 01:29:47 crc kubenswrapper[4990]: I1205 01:29:47.660691 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-b47qg"] Dec 05 01:29:47 crc kubenswrapper[4990]: I1205 01:29:47.838175 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79966545b7-krksl"] Dec 05 01:29:50 crc kubenswrapper[4990]: I1205 01:29:50.522268 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" Dec 05 01:29:50 crc kubenswrapper[4990]: E1205 01:29:50.526160 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" podUID="f54f5881-49fa-4cfa-88d9-20d0b0d9c082" Dec 05 01:29:50 crc kubenswrapper[4990]: I1205 01:29:50.740034 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" Dec 05 01:29:50 crc kubenswrapper[4990]: E1205 01:29:50.743717 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" podUID="63a6f5c3-f437-478f-b72c-afcae7a4dba8" Dec 05 01:29:50 crc kubenswrapper[4990]: I1205 01:29:50.950112 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" Dec 05 01:29:50 crc kubenswrapper[4990]: E1205 01:29:50.952574 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" podUID="a25b5669-b148-428b-a654-4a1effd836f5" Dec 05 01:29:55 crc kubenswrapper[4990]: W1205 01:29:55.213828 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddec00109_2be0_4153_86df_7ad985b1f396.slice/crio-594c644d07582fea6f3973dbe8be194432445e0ec38700fc9de0271bc86642b5 WatchSource:0}: Error finding container 594c644d07582fea6f3973dbe8be194432445e0ec38700fc9de0271bc86642b5: Status 404 returned error can't find the container with id 594c644d07582fea6f3973dbe8be194432445e0ec38700fc9de0271bc86642b5 Dec 05 01:29:55 crc kubenswrapper[4990]: W1205 01:29:55.788424 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d8a7d1_7337_4d4f_ae63_04862be6a86a.slice/crio-794faef08f7cc2eece8a96d862a241fab0459d72c9107806fac9f3f4cfcbd898 WatchSource:0}: Error finding container 794faef08f7cc2eece8a96d862a241fab0459d72c9107806fac9f3f4cfcbd898: Status 404 returned error can't find the container with id 794faef08f7cc2eece8a96d862a241fab0459d72c9107806fac9f3f4cfcbd898 Dec 05 01:29:56 crc kubenswrapper[4990]: I1205 01:29:56.064137 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" event={"ID":"58d8a7d1-7337-4d4f-ae63-04862be6a86a","Type":"ContainerStarted","Data":"794faef08f7cc2eece8a96d862a241fab0459d72c9107806fac9f3f4cfcbd898"} Dec 05 01:29:56 crc kubenswrapper[4990]: I1205 01:29:56.065693 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" event={"ID":"dec00109-2be0-4153-86df-7ad985b1f396","Type":"ContainerStarted","Data":"594c644d07582fea6f3973dbe8be194432445e0ec38700fc9de0271bc86642b5"} Dec 05 01:29:56 crc kubenswrapper[4990]: W1205 01:29:56.240387 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7cb9a8_32ec_40ef_8504_8f280c6ad2e9.slice/crio-4985f6176431ad322ce17c9ef0ee2df6073b03c6220804fa25bdbca869f46d37 WatchSource:0}: Error finding container 4985f6176431ad322ce17c9ef0ee2df6073b03c6220804fa25bdbca869f46d37: Status 404 returned error can't find the container with id 4985f6176431ad322ce17c9ef0ee2df6073b03c6220804fa25bdbca869f46d37 Dec 05 01:29:57 crc kubenswrapper[4990]: I1205 01:29:57.073267 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" event={"ID":"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9","Type":"ContainerStarted","Data":"4985f6176431ad322ce17c9ef0ee2df6073b03c6220804fa25bdbca869f46d37"} Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.139559 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr"] Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.141668 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.144791 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.146046 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.160089 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr"] Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.222102 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec540dc8-1600-4d52-9eec-ae9358ff0277-config-volume\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.222182 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec540dc8-1600-4d52-9eec-ae9358ff0277-secret-volume\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.222329 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9lp\" (UniqueName: \"kubernetes.io/projected/ec540dc8-1600-4d52-9eec-ae9358ff0277-kube-api-access-2r9lp\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.324827 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec540dc8-1600-4d52-9eec-ae9358ff0277-config-volume\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.324904 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec540dc8-1600-4d52-9eec-ae9358ff0277-secret-volume\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.325056 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r9lp\" (UniqueName: \"kubernetes.io/projected/ec540dc8-1600-4d52-9eec-ae9358ff0277-kube-api-access-2r9lp\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.326361 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec540dc8-1600-4d52-9eec-ae9358ff0277-config-volume\") pod 
\"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.335613 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec540dc8-1600-4d52-9eec-ae9358ff0277-secret-volume\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.354398 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r9lp\" (UniqueName: \"kubernetes.io/projected/ec540dc8-1600-4d52-9eec-ae9358ff0277-kube-api-access-2r9lp\") pod \"collect-profiles-29414970-m99cr\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:00 crc kubenswrapper[4990]: I1205 01:30:00.474827 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:01 crc kubenswrapper[4990]: E1205 01:30:01.018797 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" podUID="9d1a9c70-0d24-476f-a857-b06e637e24b5" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.053191 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr"] Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.107723 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" event={"ID":"b7da1eba-30c5-45a4-819d-7aef2af480c8","Type":"ContainerStarted","Data":"2d70612b0c7debd334a0a52c1a0dd4439a38896f8c4ad0d112c4f6724dd34056"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.112573 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" event={"ID":"55e02f15-1f53-4eb9-84fb-61a260485ebf","Type":"ContainerStarted","Data":"0b3966f9205dce8d68c20a1b4e673454f535c85e515c147cd6754563bd3804a5"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.113434 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" event={"ID":"307385f4-34c6-473a-a3d6-c0be9a334b68","Type":"ContainerStarted","Data":"f0b47db56bcef19b6dea63dc2f2a4117fb81be105e44fbadd093f077b26c7424"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.126044 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" event={"ID":"683b019b-d147-4c85-b537-e4000a14dfed","Type":"ContainerStarted","Data":"c230354b2acd1b534aa74041749b73470ab9ee88333c22a2bdee0c5dcb444e96"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.130819 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" event={"ID":"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9","Type":"ContainerStarted","Data":"f7ef67f0b75fec313d475079e5e23e4b3f4a1f804013492f897961d4ae08695d"} Dec 05 01:30:01 crc 
kubenswrapper[4990]: I1205 01:30:01.170322 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" event={"ID":"ec540dc8-1600-4d52-9eec-ae9358ff0277","Type":"ContainerStarted","Data":"06e27329130cca557d338c05c7abd5513b701ca65554b1ac23f757121291a11d"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.185656 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.194173 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.201561 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" event={"ID":"4879650d-849b-496e-b8de-92dde4a62982","Type":"ContainerStarted","Data":"8f68ccd2daec35d1972ce28df45ad28a4576e9dba0df4041ca496531a3f66cd2"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.213812 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" podStartSLOduration=2.479409399 podStartE2EDuration="31.213796888s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.794586808 +0000 UTC m=+1270.170802169" lastFinishedPulling="2025-12-05 01:30:00.528974297 +0000 UTC m=+1298.905189658" observedRunningTime="2025-12-05 01:30:01.208855808 +0000 UTC m=+1299.585071169" watchObservedRunningTime="2025-12-05 01:30:01.213796888 +0000 UTC m=+1299.590012249" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.226025 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" event={"ID":"b80a8ebf-4453-4f97-9bc3-9c3d8371b868","Type":"ContainerStarted","Data":"56e95436118e40a2c0156469aae52e6afa77dcef1ab3893ca096252cf8a0c942"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.245490 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" event={"ID":"9d1a9c70-0d24-476f-a857-b06e637e24b5","Type":"ContainerStarted","Data":"7b023dd5f1b8a4b513af374ed4d629be470f9b0ceba4346ba423dada93033480"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.265262 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" event={"ID":"bd1e0999-c2d5-4712-b995-18e7778231cf","Type":"ContainerStarted","Data":"4a77b5626e5a0652e19ad6b364b3cf9a1ac5030fc2a3a0d6157fe70f89137641"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.266192 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.282356 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.282627 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" event={"ID":"dec00109-2be0-4153-86df-7ad985b1f396","Type":"ContainerStarted","Data":"420e7274774f8bb9f03b9192f389ddfa4fab36933a194aea36b49b8010325fab"}
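
The pod_startup_latency_tracker lines above are plain timestamp arithmetic. For ovn-operator: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (01:30:01.213796888 minus 01:29:30 equals 31.213796888s), and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling is 28.734387489s, leaving 2.479409399), so roughly 29 of the 31 seconds went to pulling. The "m=+1299.59..." suffixes are Go monotonic-clock readings appended by time.Time's formatter and can be dropped for wall-clock math:

```go
// Reproducing the ovn-operator numbers from the log line above; values are
// copied verbatim, with the m=+… monotonic suffixes removed before parsing.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-05 01:29:30 +0000 UTC")
	firstPull := mustParse("2025-12-05 01:29:31.794586808 +0000 UTC")
	lastPull := mustParse("2025-12-05 01:30:00.528974297 +0000 UTC")
	running := mustParse("2025-12-05 01:30:01.213796888 +0000 UTC")

	e2e := running.Sub(created)
	pulling := lastPull.Sub(firstPull)
	fmt.Println(e2e)           // 31.213796888s == podStartE2EDuration
	fmt.Println(e2e - pulling) // 2.479409399s == podStartSLOduration
}
```

The openstack-operator entry just below shows the degenerate case: its firstStartedPulling/lastFinishedPulling are the zero time (no pull was recorded), so podStartSLOduration equals the full E2E duration.
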
event={"ID":"dec00109-2be0-4153-86df-7ad985b1f396","Type":"ContainerStarted","Data":"420e7274774f8bb9f03b9192f389ddfa4fab36933a194aea36b49b8010325fab"} Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.283265 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.311629 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lfhcj" podStartSLOduration=5.096798466 podStartE2EDuration="31.311610104s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.890456458 +0000 UTC m=+1270.266671819" lastFinishedPulling="2025-12-05 01:29:58.105268096 +0000 UTC m=+1296.481483457" observedRunningTime="2025-12-05 01:30:01.307526198 +0000 UTC m=+1299.683741559" watchObservedRunningTime="2025-12-05 01:30:01.311610104 +0000 UTC m=+1299.687825465" Dec 05 01:30:01 crc kubenswrapper[4990]: I1205 01:30:01.370504 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" podStartSLOduration=31.370473104 podStartE2EDuration="31.370473104s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:30:01.366767999 +0000 UTC m=+1299.742983360" watchObservedRunningTime="2025-12-05 01:30:01.370473104 +0000 UTC m=+1299.746688465" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.291097 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" event={"ID":"45e4a6f1-ff34-4e14-8a58-c3e88d998169","Type":"ContainerStarted","Data":"2fb203fff59200bc05f67d6ae7155c7c26c0f3a5d92525e1271881493c654315"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.291876 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.294722 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" event={"ID":"b80a8ebf-4453-4f97-9bc3-9c3d8371b868","Type":"ContainerStarted","Data":"32c69e1e88aedc08b9879ad9c9b229052ccf2082e56e16c49328447ed90655ce"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.294813 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.296419 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" event={"ID":"4879650d-849b-496e-b8de-92dde4a62982","Type":"ContainerStarted","Data":"45347480f6c4401b9adaee72a39fc961709a0b6ffd0f2f4332e86f45600f1280"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.296578 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.297624 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" Dec 05 01:30:02 crc 
kubenswrapper[4990]: I1205 01:30:02.301022 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" event={"ID":"a95995a7-92e3-40c0-8fad-30e47ea759e1","Type":"ContainerStarted","Data":"40463904c6f2b94f646ea755c6a4ddf77de2397ae3b7b0d1524501d2dd1d30aa"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.301131 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.303865 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.306266 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" event={"ID":"ec540dc8-1600-4d52-9eec-ae9358ff0277","Type":"ContainerStarted","Data":"b9f419a3302b699ba4b83927282d6da97e8fc8e840536d94aee52b07a9aa71fd"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.314874 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p8zs5" podStartSLOduration=3.230071829 podStartE2EDuration="32.31486174s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.371686529 +0000 UTC m=+1269.747901890" lastFinishedPulling="2025-12-05 01:30:00.45647644 +0000 UTC m=+1298.832691801" observedRunningTime="2025-12-05 01:30:02.313765209 +0000 UTC m=+1300.689980570" watchObservedRunningTime="2025-12-05 01:30:02.31486174 +0000 UTC m=+1300.691077101" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.321775 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" event={"ID":"c19a83c4-e130-47a2-81d2-04dbea61d6c1","Type":"ContainerStarted","Data":"dd9ef9646a57a2f6ee07330f02ab308e6e275eee43fd4cdf8a64b0492d8f8058"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.322265 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.332682 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.337150 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" event={"ID":"58d8a7d1-7337-4d4f-ae63-04862be6a86a","Type":"ContainerStarted","Data":"789d6a3821d9fd393b14ac4e16b320a89fc4641518bfd487717caf71b9f7bd83"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.337180 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" event={"ID":"58d8a7d1-7337-4d4f-ae63-04862be6a86a","Type":"ContainerStarted","Data":"a38b5fe238289fca1c08582ca96f2e3da00fd7458761f4a13e39d76364530c5e"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.337181 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-t42g2" podStartSLOduration=3.088387579 podStartE2EDuration="32.337169173s" 
podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.279665648 +0000 UTC m=+1269.655881009" lastFinishedPulling="2025-12-05 01:30:00.528447242 +0000 UTC m=+1298.904662603" observedRunningTime="2025-12-05 01:30:02.33565035 +0000 UTC m=+1300.711865711" watchObservedRunningTime="2025-12-05 01:30:02.337169173 +0000 UTC m=+1300.713384524" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.337795 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.339412 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" event={"ID":"b7da1eba-30c5-45a4-819d-7aef2af480c8","Type":"ContainerStarted","Data":"746f5576ab85eb0822729d6aa7b90cabf38528c86e8bf783341fd0d580608e51"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.339586 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.347155 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" event={"ID":"55e02f15-1f53-4eb9-84fb-61a260485ebf","Type":"ContainerStarted","Data":"115de3473a68e99f4f343ebaff20c26813eab44bca59887633b55d3be4e29138"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.347440 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.363623 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" event={"ID":"4ac7ce06-d864-4577-a628-201945f57f8a","Type":"ContainerStarted","Data":"2e7f6f970d553db4cd4b266037ea06a440994eef6c26b9aedd89b3526e5de9d7"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.364517 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.370904 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.372003 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" podStartSLOduration=6.246214449 podStartE2EDuration="32.371986121s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.894589185 +0000 UTC m=+1270.270804546" lastFinishedPulling="2025-12-05 01:29:58.020360857 +0000 UTC m=+1296.396576218" observedRunningTime="2025-12-05 01:30:02.368918874 +0000 UTC m=+1300.745134255" watchObservedRunningTime="2025-12-05 01:30:02.371986121 +0000 UTC m=+1300.748201482" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.397622 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" event={"ID":"a25b5669-b148-428b-a654-4a1effd836f5","Type":"ContainerStarted","Data":"149135b30e8d773b324e46011119b9dab93ed13b64948583872d72705721fdf8"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 
01:30:02.416330 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" event={"ID":"53d5cea9-4a5f-4663-8511-4e830d5c86bc","Type":"ContainerStarted","Data":"f7f50d622b0dc5698b06e1bbc38334d5a4da557b0e7f56ba15e6540962a963f9"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.417241 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.432666 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.433963 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" podStartSLOduration=3.903419294 podStartE2EDuration="32.433946339s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.901260424 +0000 UTC m=+1270.277475785" lastFinishedPulling="2025-12-05 01:30:00.431787469 +0000 UTC m=+1298.808002830" observedRunningTime="2025-12-05 01:30:02.4307936 +0000 UTC m=+1300.807008961" watchObservedRunningTime="2025-12-05 01:30:02.433946339 +0000 UTC m=+1300.810161700" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.445713 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k" event={"ID":"4aeaf936-39c4-4558-bba8-c47839e79431","Type":"ContainerStarted","Data":"b45fe26578145e6bc8db2191febabf08744a7b636fd306246b99eff7faca2111"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.452056 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" podStartSLOduration=2.452039573 podStartE2EDuration="2.452039573s" podCreationTimestamp="2025-12-05 01:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:30:02.4512376 +0000 UTC m=+1300.827452961" watchObservedRunningTime="2025-12-05 01:30:02.452039573 +0000 UTC m=+1300.828254934" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.457305 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" event={"ID":"e908e515-9470-4a27-912f-a266a4ffe3a9","Type":"ContainerStarted","Data":"14a4ee9f08f6dbcbd2bea4771b2e587bca33edc4a24a6a19397178e772a1a8ea"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.458234 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.461020 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.461123 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wk9np" event={"ID":"438c87d1-af5c-42ee-988c-82d88ebd6439","Type":"ContainerStarted","Data":"0ba961764e62358250394634cbbb98e97deb54bf01b648c9b35291489005e59a"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.477038 4990 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" event={"ID":"9d1a9c70-0d24-476f-a857-b06e637e24b5","Type":"ContainerStarted","Data":"1862582660c47555bce6b013c475bd409e5655e392abd49081d0d468d387c7cc"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.477076 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.492135 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" event={"ID":"f6574447-fe34-4fc6-a99d-8f9898a73019","Type":"ContainerStarted","Data":"2ed922463424ca3118dbd4df8990b27404a6926cefded4ea4a68d00b8d1c9cc9"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.492877 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.497713 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.511838 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" event={"ID":"307385f4-34c6-473a-a3d6-c0be9a334b68","Type":"ContainerStarted","Data":"61227d25a17037cd684cb067961f390fa98f54c7f7c207a4e66088389fddd44c"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.512461 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.524543 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" podStartSLOduration=27.83292247 podStartE2EDuration="32.52453051s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:55.791856055 +0000 UTC m=+1294.168071406" lastFinishedPulling="2025-12-05 01:30:00.483464085 +0000 UTC m=+1298.859679446" observedRunningTime="2025-12-05 01:30:02.524133908 +0000 UTC m=+1300.900349269" watchObservedRunningTime="2025-12-05 01:30:02.52453051 +0000 UTC m=+1300.900745871" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.525071 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" podStartSLOduration=4.346357693 podStartE2EDuration="32.525066685s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.813190406 +0000 UTC m=+1270.189405767" lastFinishedPulling="2025-12-05 01:29:59.991899398 +0000 UTC m=+1298.368114759" observedRunningTime="2025-12-05 01:30:02.483402103 +0000 UTC m=+1300.859617464" watchObservedRunningTime="2025-12-05 01:30:02.525066685 +0000 UTC m=+1300.901282046" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.533441 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" event={"ID":"683b019b-d147-4c85-b537-e4000a14dfed","Type":"ContainerStarted","Data":"1e8e689de10035afbdbbf5b73d1b54bc26ce39ffdfec58099bc96759fc4c8842"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.534198 4990 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.552036 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" event={"ID":"8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9","Type":"ContainerStarted","Data":"d1261403798cf3aeefbda929c4cfdba7d365788e0a885983f9725a582126629f"} Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.552072 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.560794 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-k7zzp" podStartSLOduration=3.67312116 podStartE2EDuration="32.560774088s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.597998629 +0000 UTC m=+1269.974213990" lastFinishedPulling="2025-12-05 01:30:00.485651537 +0000 UTC m=+1298.861866918" observedRunningTime="2025-12-05 01:30:02.550756834 +0000 UTC m=+1300.926972215" watchObservedRunningTime="2025-12-05 01:30:02.560774088 +0000 UTC m=+1300.936989449" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.595142 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-wgzhs" podStartSLOduration=3.66356913 podStartE2EDuration="32.595121783s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.596880838 +0000 UTC m=+1269.973096199" lastFinishedPulling="2025-12-05 01:30:00.528433451 +0000 UTC m=+1298.904648852" observedRunningTime="2025-12-05 01:30:02.575339581 +0000 UTC m=+1300.951554942" watchObservedRunningTime="2025-12-05 01:30:02.595121783 +0000 UTC m=+1300.971337134" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.610905 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6g29k" podStartSLOduration=4.083677599 podStartE2EDuration="32.61088694s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:32.034260028 +0000 UTC m=+1270.410475389" lastFinishedPulling="2025-12-05 01:30:00.561469369 +0000 UTC m=+1298.937684730" observedRunningTime="2025-12-05 01:30:02.600370872 +0000 UTC m=+1300.976586223" watchObservedRunningTime="2025-12-05 01:30:02.61088694 +0000 UTC m=+1300.987102301" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.631018 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-mhqsp" podStartSLOduration=19.59879227 podStartE2EDuration="32.631003971s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.794039342 +0000 UTC m=+1270.170254703" lastFinishedPulling="2025-12-05 01:29:44.826251033 +0000 UTC m=+1283.202466404" observedRunningTime="2025-12-05 01:30:02.628128159 +0000 UTC m=+1301.004343520" watchObservedRunningTime="2025-12-05 01:30:02.631003971 +0000 UTC m=+1301.007219332" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.669447 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-fqcx2" 
podStartSLOduration=3.977568229 podStartE2EDuration="32.668829494s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.791095139 +0000 UTC m=+1270.167310500" lastFinishedPulling="2025-12-05 01:30:00.482356404 +0000 UTC m=+1298.858571765" observedRunningTime="2025-12-05 01:30:02.666011044 +0000 UTC m=+1301.042226405" watchObservedRunningTime="2025-12-05 01:30:02.668829494 +0000 UTC m=+1301.045044855" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.704850 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" podStartSLOduration=6.488095253 podStartE2EDuration="32.704827656s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.803590523 +0000 UTC m=+1270.179805884" lastFinishedPulling="2025-12-05 01:29:58.020322926 +0000 UTC m=+1296.396538287" observedRunningTime="2025-12-05 01:30:02.696588892 +0000 UTC m=+1301.072804253" watchObservedRunningTime="2025-12-05 01:30:02.704827656 +0000 UTC m=+1301.081043017" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.723779 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" podStartSLOduration=6.656323016 podStartE2EDuration="32.723765023s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:32.015476835 +0000 UTC m=+1270.391692196" lastFinishedPulling="2025-12-05 01:29:58.082918842 +0000 UTC m=+1296.459134203" observedRunningTime="2025-12-05 01:30:02.723458214 +0000 UTC m=+1301.099673575" watchObservedRunningTime="2025-12-05 01:30:02.723765023 +0000 UTC m=+1301.099980384" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.752197 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6fgdz" podStartSLOduration=4.150036153 podStartE2EDuration="32.752179409s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.39110001 +0000 UTC m=+1269.767315371" lastFinishedPulling="2025-12-05 01:29:59.993243256 +0000 UTC m=+1298.369458627" observedRunningTime="2025-12-05 01:30:02.749665388 +0000 UTC m=+1301.125880749" watchObservedRunningTime="2025-12-05 01:30:02.752179409 +0000 UTC m=+1301.128394770" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.790889 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" podStartSLOduration=2.4736472259999998 podStartE2EDuration="32.790872427s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.526150091 +0000 UTC m=+1269.902365452" lastFinishedPulling="2025-12-05 01:30:01.843375302 +0000 UTC m=+1300.219590653" observedRunningTime="2025-12-05 01:30:02.772196777 +0000 UTC m=+1301.148412138" watchObservedRunningTime="2025-12-05 01:30:02.790872427 +0000 UTC m=+1301.167087788" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.850377 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" podStartSLOduration=6.658960732 podStartE2EDuration="32.850358795s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.892215068 +0000 UTC m=+1270.268430429" lastFinishedPulling="2025-12-05 
01:29:58.083613131 +0000 UTC m=+1296.459828492" observedRunningTime="2025-12-05 01:30:02.847735361 +0000 UTC m=+1301.223950722" watchObservedRunningTime="2025-12-05 01:30:02.850358795 +0000 UTC m=+1301.226574156" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.852779 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-9mdd5" podStartSLOduration=3.944387118 podStartE2EDuration="32.852744333s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.629240786 +0000 UTC m=+1270.005456147" lastFinishedPulling="2025-12-05 01:30:00.537598011 +0000 UTC m=+1298.913813362" observedRunningTime="2025-12-05 01:30:02.827198368 +0000 UTC m=+1301.203413739" watchObservedRunningTime="2025-12-05 01:30:02.852744333 +0000 UTC m=+1301.228959694" Dec 05 01:30:02 crc kubenswrapper[4990]: I1205 01:30:02.887471 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" podStartSLOduration=28.672547523 podStartE2EDuration="32.887449877s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:56.269832827 +0000 UTC m=+1294.646048188" lastFinishedPulling="2025-12-05 01:30:00.484735191 +0000 UTC m=+1298.860950542" observedRunningTime="2025-12-05 01:30:02.869150858 +0000 UTC m=+1301.245366239" watchObservedRunningTime="2025-12-05 01:30:02.887449877 +0000 UTC m=+1301.263665238" Dec 05 01:30:03 crc kubenswrapper[4990]: I1205 01:30:03.564415 4990 generic.go:334] "Generic (PLEG): container finished" podID="ec540dc8-1600-4d52-9eec-ae9358ff0277" containerID="b9f419a3302b699ba4b83927282d6da97e8fc8e840536d94aee52b07a9aa71fd" exitCode=0 Dec 05 01:30:03 crc kubenswrapper[4990]: I1205 01:30:03.564464 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" event={"ID":"ec540dc8-1600-4d52-9eec-ae9358ff0277","Type":"ContainerDied","Data":"b9f419a3302b699ba4b83927282d6da97e8fc8e840536d94aee52b07a9aa71fd"} Dec 05 01:30:03 crc kubenswrapper[4990]: I1205 01:30:03.568074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" event={"ID":"63a6f5c3-f437-478f-b72c-afcae7a4dba8","Type":"ContainerStarted","Data":"cfdacfd15bedf254c9159936fe3621539d5c0420ed3da9e87ab4a7c9cddcdb81"} Dec 05 01:30:03 crc kubenswrapper[4990]: I1205 01:30:03.618577 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-t29m4" podStartSLOduration=20.510054227 podStartE2EDuration="33.618555512s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.610543666 +0000 UTC m=+1269.986759027" lastFinishedPulling="2025-12-05 01:29:44.719044931 +0000 UTC m=+1283.095260312" observedRunningTime="2025-12-05 01:30:03.614905249 +0000 UTC m=+1301.991120630" watchObservedRunningTime="2025-12-05 01:30:03.618555512 +0000 UTC m=+1301.994770893" Dec 05 01:30:04 crc kubenswrapper[4990]: I1205 01:30:04.895551 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.022108 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec540dc8-1600-4d52-9eec-ae9358ff0277-secret-volume\") pod \"ec540dc8-1600-4d52-9eec-ae9358ff0277\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.022353 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec540dc8-1600-4d52-9eec-ae9358ff0277-config-volume\") pod \"ec540dc8-1600-4d52-9eec-ae9358ff0277\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.022509 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r9lp\" (UniqueName: \"kubernetes.io/projected/ec540dc8-1600-4d52-9eec-ae9358ff0277-kube-api-access-2r9lp\") pod \"ec540dc8-1600-4d52-9eec-ae9358ff0277\" (UID: \"ec540dc8-1600-4d52-9eec-ae9358ff0277\") " Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.022904 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec540dc8-1600-4d52-9eec-ae9358ff0277-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec540dc8-1600-4d52-9eec-ae9358ff0277" (UID: "ec540dc8-1600-4d52-9eec-ae9358ff0277"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.023188 4990 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec540dc8-1600-4d52-9eec-ae9358ff0277-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.027867 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec540dc8-1600-4d52-9eec-ae9358ff0277-kube-api-access-2r9lp" (OuterVolumeSpecName: "kube-api-access-2r9lp") pod "ec540dc8-1600-4d52-9eec-ae9358ff0277" (UID: "ec540dc8-1600-4d52-9eec-ae9358ff0277"). InnerVolumeSpecName "kube-api-access-2r9lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.028225 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec540dc8-1600-4d52-9eec-ae9358ff0277-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec540dc8-1600-4d52-9eec-ae9358ff0277" (UID: "ec540dc8-1600-4d52-9eec-ae9358ff0277"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.124038 4990 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec540dc8-1600-4d52-9eec-ae9358ff0277-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.124078 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r9lp\" (UniqueName: \"kubernetes.io/projected/ec540dc8-1600-4d52-9eec-ae9358ff0277-kube-api-access-2r9lp\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.589814 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" event={"ID":"f54f5881-49fa-4cfa-88d9-20d0b0d9c082","Type":"ContainerStarted","Data":"a17da142a2048615484f52c67d78699ee604f2ff04b6a92dd1576b3f6e965ebb"} Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.591395 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" event={"ID":"ec540dc8-1600-4d52-9eec-ae9358ff0277","Type":"ContainerDied","Data":"06e27329130cca557d338c05c7abd5513b701ca65554b1ac23f757121291a11d"} Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.591447 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e27329130cca557d338c05c7abd5513b701ca65554b1ac23f757121291a11d" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.591556 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414970-m99cr" Dec 05 01:30:05 crc kubenswrapper[4990]: I1205 01:30:05.621966 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h8nnl" podStartSLOduration=22.190126519 podStartE2EDuration="35.621946237s" podCreationTimestamp="2025-12-05 01:29:30 +0000 UTC" firstStartedPulling="2025-12-05 01:29:31.249602425 +0000 UTC m=+1269.625817786" lastFinishedPulling="2025-12-05 01:29:44.681422103 +0000 UTC m=+1283.057637504" observedRunningTime="2025-12-05 01:30:05.617672275 +0000 UTC m=+1303.993887656" watchObservedRunningTime="2025-12-05 01:30:05.621946237 +0000 UTC m=+1303.998161598" Dec 05 01:30:06 crc kubenswrapper[4990]: I1205 01:30:06.452306 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-b47qg" Dec 05 01:30:06 crc kubenswrapper[4990]: I1205 01:30:06.682861 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt" Dec 05 01:30:07 crc kubenswrapper[4990]: I1205 01:30:07.010255 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79966545b7-krksl" Dec 05 01:30:10 crc kubenswrapper[4990]: I1205 01:30:10.647937 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-lr2g9" Dec 05 01:30:10 crc kubenswrapper[4990]: I1205 01:30:10.885987 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-sqqtl" Dec 05 01:30:10 crc kubenswrapper[4990]: I1205 01:30:10.927832 4990 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-88qgd" Dec 05 01:30:11 crc kubenswrapper[4990]: I1205 01:30:11.100266 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-gc6bg" Dec 05 01:30:11 crc kubenswrapper[4990]: I1205 01:30:11.168670 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4fl28" Dec 05 01:30:11 crc kubenswrapper[4990]: I1205 01:30:11.244582 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-jfnfg" Dec 05 01:30:11 crc kubenswrapper[4990]: I1205 01:30:11.309947 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-mv9jg" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.594437 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-csbdn"] Dec 05 01:30:26 crc kubenswrapper[4990]: E1205 01:30:26.595410 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec540dc8-1600-4d52-9eec-ae9358ff0277" containerName="collect-profiles" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.595429 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec540dc8-1600-4d52-9eec-ae9358ff0277" containerName="collect-profiles" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.595631 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec540dc8-1600-4d52-9eec-ae9358ff0277" containerName="collect-profiles" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.596540 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.602432 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.602743 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.602860 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-44nvg" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.602983 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.610016 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-csbdn"] Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.675120 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n9x8x"] Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.677857 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.680082 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.686069 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n9x8x"] Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.765452 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-config\") pod \"dnsmasq-dns-675f4bcbfc-csbdn\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.765625 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whmd\" (UniqueName: \"kubernetes.io/projected/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-kube-api-access-8whmd\") pod \"dnsmasq-dns-675f4bcbfc-csbdn\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.867799 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-config\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.867913 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.867988 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-config\") pod \"dnsmasq-dns-675f4bcbfc-csbdn\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.868032 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vsf\" (UniqueName: \"kubernetes.io/projected/3370eea3-2944-41fe-8572-1b01dd5f39fa-kube-api-access-d2vsf\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.868079 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8whmd\" (UniqueName: \"kubernetes.io/projected/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-kube-api-access-8whmd\") pod \"dnsmasq-dns-675f4bcbfc-csbdn\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.869354 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-config\") pod \"dnsmasq-dns-675f4bcbfc-csbdn\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 
01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.885754 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8whmd\" (UniqueName: \"kubernetes.io/projected/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-kube-api-access-8whmd\") pod \"dnsmasq-dns-675f4bcbfc-csbdn\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.920447 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.968642 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-config\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.968706 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.968744 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vsf\" (UniqueName: \"kubernetes.io/projected/3370eea3-2944-41fe-8572-1b01dd5f39fa-kube-api-access-d2vsf\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.969931 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.970436 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-config\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.990007 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vsf\" (UniqueName: \"kubernetes.io/projected/3370eea3-2944-41fe-8572-1b01dd5f39fa-kube-api-access-d2vsf\") pod \"dnsmasq-dns-78dd6ddcc-n9x8x\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:26 crc kubenswrapper[4990]: I1205 01:30:26.996107 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:27 crc kubenswrapper[4990]: I1205 01:30:27.355003 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-csbdn"] Dec 05 01:30:27 crc kubenswrapper[4990]: I1205 01:30:27.415451 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n9x8x"] Dec 05 01:30:27 crc kubenswrapper[4990]: W1205 01:30:27.420897 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3370eea3_2944_41fe_8572_1b01dd5f39fa.slice/crio-73a96c20a83ba198efe6ba765e44f7c57aaee201b3bfa9450cc6a23796ab8ffa WatchSource:0}: Error finding container 73a96c20a83ba198efe6ba765e44f7c57aaee201b3bfa9450cc6a23796ab8ffa: Status 404 returned error can't find the container with id 73a96c20a83ba198efe6ba765e44f7c57aaee201b3bfa9450cc6a23796ab8ffa Dec 05 01:30:27 crc kubenswrapper[4990]: I1205 01:30:27.809807 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" event={"ID":"3370eea3-2944-41fe-8572-1b01dd5f39fa","Type":"ContainerStarted","Data":"73a96c20a83ba198efe6ba765e44f7c57aaee201b3bfa9450cc6a23796ab8ffa"} Dec 05 01:30:27 crc kubenswrapper[4990]: I1205 01:30:27.812090 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" event={"ID":"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3","Type":"ContainerStarted","Data":"3a9fcef823f50fd7c860ba781de98dc83743e9b26e0cb10cceeaba86b6469e47"} Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.178754 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-csbdn"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.206734 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qv2fl"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.208216 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.223037 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qv2fl"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.314653 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qxj\" (UniqueName: \"kubernetes.io/projected/d59839fd-b60f-4cb2-bfb7-166ad40576f2-kube-api-access-42qxj\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.314801 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.314896 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-config\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.417064 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.417123 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-config\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.417182 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42qxj\" (UniqueName: \"kubernetes.io/projected/d59839fd-b60f-4cb2-bfb7-166ad40576f2-kube-api-access-42qxj\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.418684 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-config\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.418855 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.453559 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42qxj\" (UniqueName: 
\"kubernetes.io/projected/d59839fd-b60f-4cb2-bfb7-166ad40576f2-kube-api-access-42qxj\") pod \"dnsmasq-dns-5ccc8479f9-qv2fl\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.458796 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n9x8x"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.498519 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rldr"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.499927 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.506630 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rldr"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.534078 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.624731 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjljs\" (UniqueName: \"kubernetes.io/projected/d858e3af-1688-433c-ad50-09aef898edda-kube-api-access-gjljs\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.624822 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-config\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.624901 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.725822 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjljs\" (UniqueName: \"kubernetes.io/projected/d858e3af-1688-433c-ad50-09aef898edda-kube-api-access-gjljs\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.725905 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-config\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.725975 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.726885 4990 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.728304 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-config\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.741649 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjljs\" (UniqueName: \"kubernetes.io/projected/d858e3af-1688-433c-ad50-09aef898edda-kube-api-access-gjljs\") pod \"dnsmasq-dns-57d769cc4f-7rldr\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.802587 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qv2fl"] Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.824437 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:29 crc kubenswrapper[4990]: I1205 01:30:29.847853 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" event={"ID":"d59839fd-b60f-4cb2-bfb7-166ad40576f2","Type":"ContainerStarted","Data":"4304e6c8b4f52a819676c1c0234a78ce649800a398abe5e62c92094b71cb0756"} Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.313154 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rldr"] Dec 05 01:30:30 crc kubenswrapper[4990]: W1205 01:30:30.324915 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd858e3af_1688_433c_ad50_09aef898edda.slice/crio-f37fcc88a7dc384d40166777177b274125cda284fe438002f822ef5abc4d83cd WatchSource:0}: Error finding container f37fcc88a7dc384d40166777177b274125cda284fe438002f822ef5abc4d83cd: Status 404 returned error can't find the container with id f37fcc88a7dc384d40166777177b274125cda284fe438002f822ef5abc4d83cd Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.340621 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.342672 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.344242 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.349755 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.350164 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.350409 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.351022 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jkknc" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.352201 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.353992 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.359253 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541493 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541857 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541885 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541927 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541949 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed473a7a-f068-49a3-ae4c-b57b39e33b28-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541967 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed473a7a-f068-49a3-ae4c-b57b39e33b28-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.541985 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.542015 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.542036 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.542056 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5m4\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-kube-api-access-dh5m4\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.542076 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.635280 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.639344 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.642846 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.643119 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.643690 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.642880 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tjp5h" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.644095 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.644556 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed473a7a-f068-49a3-ae4c-b57b39e33b28-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.644699 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed473a7a-f068-49a3-ae4c-b57b39e33b28-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.645193 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.646068 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.645117 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.647268 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.647389 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.648235 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5m4\" (UniqueName: 
\"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-kube-api-access-dh5m4\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.648347 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.651756 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.651931 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.652052 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.653319 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.653724 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.644823 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.654136 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.642982 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.665683 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.666871 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.667332 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed473a7a-f068-49a3-ae4c-b57b39e33b28-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.674194 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.672543 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.668352 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.681239 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed473a7a-f068-49a3-ae4c-b57b39e33b28-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.683598 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5m4\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-kube-api-access-dh5m4\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.745967 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.753947 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.753995 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754013 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754032 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754048 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/809c1920-3205-411c-a8c1-ed027b7e3b1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754062 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ms9\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-kube-api-access-24ms9\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754097 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754115 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754136 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754568 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/809c1920-3205-411c-a8c1-ed027b7e3b1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.754623 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856255 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856291 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856331 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856386 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/809c1920-3205-411c-a8c1-ed027b7e3b1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856403 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856438 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856458 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856509 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856531 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/809c1920-3205-411c-a8c1-ed027b7e3b1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856545 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.856561 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24ms9\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-kube-api-access-24ms9\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.857043 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.857976 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.858257 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.858552 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.858577 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.858728 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.860675 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" event={"ID":"d858e3af-1688-433c-ad50-09aef898edda","Type":"ContainerStarted","Data":"f37fcc88a7dc384d40166777177b274125cda284fe438002f822ef5abc4d83cd"} Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.860951 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/809c1920-3205-411c-a8c1-ed027b7e3b1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.877986 4990 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.878999 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.879234 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/809c1920-3205-411c-a8c1-ed027b7e3b1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.880736 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24ms9\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-kube-api-access-24ms9\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.881875 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " pod="openstack/rabbitmq-server-0" Dec 05 01:30:30 crc kubenswrapper[4990]: I1205 01:30:30.962383 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.098099 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.434290 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 01:30:31 crc kubenswrapper[4990]: W1205 01:30:31.448638 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded473a7a_f068_49a3_ae4c_b57b39e33b28.slice/crio-2463bc5c975a8980bdf5525196080f584d67342d7e96699073d6039c30d943d0 WatchSource:0}: Error finding container 2463bc5c975a8980bdf5525196080f584d67342d7e96699073d6039c30d943d0: Status 404 returned error can't find the container with id 2463bc5c975a8980bdf5525196080f584d67342d7e96699073d6039c30d943d0 Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.543764 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 01:30:31 crc kubenswrapper[4990]: W1205 01:30:31.551969 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod809c1920_3205_411c_a8c1_ed027b7e3b1f.slice/crio-7c3a5ff8572e5e01dccbeb3b390173293ce194495e3da7f9a10d6eb438c0a293 WatchSource:0}: Error finding container 7c3a5ff8572e5e01dccbeb3b390173293ce194495e3da7f9a10d6eb438c0a293: Status 404 returned error can't find the container with id 7c3a5ff8572e5e01dccbeb3b390173293ce194495e3da7f9a10d6eb438c0a293 Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.749909 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.751508 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.753601 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.754725 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.757647 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.757667 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-j8mtq" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.758584 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.762817 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870415 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870469 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " 
pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870565 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870610 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-default\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870641 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870723 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870745 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-kolla-config\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.870866 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvf7\" (UniqueName: \"kubernetes.io/projected/00beb76a-d4d2-4cd8-bc04-e268c2397388-kube-api-access-wjvf7\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.874648 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ed473a7a-f068-49a3-ae4c-b57b39e33b28","Type":"ContainerStarted","Data":"2463bc5c975a8980bdf5525196080f584d67342d7e96699073d6039c30d943d0"} Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.876203 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"809c1920-3205-411c-a8c1-ed027b7e3b1f","Type":"ContainerStarted","Data":"7c3a5ff8572e5e01dccbeb3b390173293ce194495e3da7f9a10d6eb438c0a293"} Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.977834 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.977902 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-kolla-config\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.977963 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvf7\" (UniqueName: \"kubernetes.io/projected/00beb76a-d4d2-4cd8-bc04-e268c2397388-kube-api-access-wjvf7\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.978058 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.978081 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.978114 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.978179 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-default\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.978217 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.979749 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-kolla-config\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.980088 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.981447 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-default\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " 
pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.982171 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.985046 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.985978 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.997291 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:31 crc kubenswrapper[4990]: I1205 01:30:31.998240 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvf7\" (UniqueName: \"kubernetes.io/projected/00beb76a-d4d2-4cd8-bc04-e268c2397388-kube-api-access-wjvf7\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:32 crc kubenswrapper[4990]: I1205 01:30:32.012162 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " pod="openstack/openstack-galera-0" Dec 05 01:30:32 crc kubenswrapper[4990]: I1205 01:30:32.076744 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 01:30:32 crc kubenswrapper[4990]: I1205 01:30:32.536130 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 01:30:32 crc kubenswrapper[4990]: W1205 01:30:32.545035 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00beb76a_d4d2_4cd8_bc04_e268c2397388.slice/crio-f0a0fcb7173185444d18bcea3eb5a01cdd9f7a5e66c7e956200719c63cb6978e WatchSource:0}: Error finding container f0a0fcb7173185444d18bcea3eb5a01cdd9f7a5e66c7e956200719c63cb6978e: Status 404 returned error can't find the container with id f0a0fcb7173185444d18bcea3eb5a01cdd9f7a5e66c7e956200719c63cb6978e Dec 05 01:30:32 crc kubenswrapper[4990]: I1205 01:30:32.883682 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00beb76a-d4d2-4cd8-bc04-e268c2397388","Type":"ContainerStarted","Data":"f0a0fcb7173185444d18bcea3eb5a01cdd9f7a5e66c7e956200719c63cb6978e"} Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.154006 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.156152 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.159848 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.160794 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.162578 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-q2vcv" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.164138 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.174593 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.311810 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312425 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrnp\" (UniqueName: \"kubernetes.io/projected/2c281c58-a95e-4669-bdfc-465759817928-kube-api-access-7zrnp\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312573 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312630 
4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312690 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312837 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312915 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c281c58-a95e-4669-bdfc-465759817928-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.312939 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.414592 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.414708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zrnp\" (UniqueName: \"kubernetes.io/projected/2c281c58-a95e-4669-bdfc-465759817928-kube-api-access-7zrnp\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.414872 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.414968 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.415098 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.415196 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.415275 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.415327 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c281c58-a95e-4669-bdfc-465759817928-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.415943 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.415978 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.416137 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.416223 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c281c58-a95e-4669-bdfc-465759817928-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.418076 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.423052 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.431373 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.442037 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zrnp\" (UniqueName: \"kubernetes.io/projected/2c281c58-a95e-4669-bdfc-465759817928-kube-api-access-7zrnp\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.458528 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.478065 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.547809 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.549014 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.550697 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xbgjx" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.550893 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.551814 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.560420 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.618362 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.618418 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-config-data\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.618512 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.618548 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/cd4b299f-9ab6-4714-b911-9b1e11708f39-kube-api-access-8kxhw\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.618587 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-kolla-config\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.719640 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.719989 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/cd4b299f-9ab6-4714-b911-9b1e11708f39-kube-api-access-8kxhw\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.720033 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-kolla-config\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.720055 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.720078 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-config-data\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.721738 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-kolla-config\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.725381 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-config-data\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.726855 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.728151 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.748663 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/cd4b299f-9ab6-4714-b911-9b1e11708f39-kube-api-access-8kxhw\") pod \"memcached-0\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " pod="openstack/memcached-0" Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.796043 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 01:30:33 crc kubenswrapper[4990]: W1205 01:30:33.808226 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c281c58_a95e_4669_bdfc_465759817928.slice/crio-5a29c412dac5423a08ae9fce6d1087baa1761f3318624a0bbfc2adcf8be0a81b WatchSource:0}: Error finding container 5a29c412dac5423a08ae9fce6d1087baa1761f3318624a0bbfc2adcf8be0a81b: Status 404 returned error can't find the container with id 5a29c412dac5423a08ae9fce6d1087baa1761f3318624a0bbfc2adcf8be0a81b Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.891508 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c281c58-a95e-4669-bdfc-465759817928","Type":"ContainerStarted","Data":"5a29c412dac5423a08ae9fce6d1087baa1761f3318624a0bbfc2adcf8be0a81b"} Dec 05 01:30:33 crc kubenswrapper[4990]: I1205 01:30:33.899986 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 01:30:34 crc kubenswrapper[4990]: I1205 01:30:34.132098 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 01:30:34 crc kubenswrapper[4990]: W1205 01:30:34.133189 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4b299f_9ab6_4714_b911_9b1e11708f39.slice/crio-6f449a97865ee5ddd5585e5ec55ce64c49a49a43bfd44e17528c85dcce29ecf7 WatchSource:0}: Error finding container 6f449a97865ee5ddd5585e5ec55ce64c49a49a43bfd44e17528c85dcce29ecf7: Status 404 returned error can't find the container with id 6f449a97865ee5ddd5585e5ec55ce64c49a49a43bfd44e17528c85dcce29ecf7 Dec 05 01:30:34 crc kubenswrapper[4990]: I1205 01:30:34.911700 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cd4b299f-9ab6-4714-b911-9b1e11708f39","Type":"ContainerStarted","Data":"6f449a97865ee5ddd5585e5ec55ce64c49a49a43bfd44e17528c85dcce29ecf7"} Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.492077 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.493399 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.496191 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xf6nr" Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.510651 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.552570 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62497\" (UniqueName: \"kubernetes.io/projected/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7-kube-api-access-62497\") pod \"kube-state-metrics-0\" (UID: \"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7\") " pod="openstack/kube-state-metrics-0" Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.655708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62497\" (UniqueName: \"kubernetes.io/projected/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7-kube-api-access-62497\") pod \"kube-state-metrics-0\" (UID: \"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7\") " pod="openstack/kube-state-metrics-0" Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.680804 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62497\" (UniqueName: \"kubernetes.io/projected/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7-kube-api-access-62497\") pod \"kube-state-metrics-0\" (UID: \"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7\") " pod="openstack/kube-state-metrics-0" Dec 05 01:30:35 crc kubenswrapper[4990]: I1205 01:30:35.824414 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.846430 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nbpzw"] Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.848448 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.851591 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.851777 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l8jdw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.851946 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.856476 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nbpzw"] Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.898567 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2j9fb"] Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.900236 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.906087 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2j9fb"] Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916351 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-log-ovn\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916438 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d269e431-18be-4f4a-a63f-fee37cf08d46-scripts\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916544 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtbw2\" (UniqueName: \"kubernetes.io/projected/d269e431-18be-4f4a-a63f-fee37cf08d46-kube-api-access-mtbw2\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916584 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-combined-ca-bundle\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916761 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-ovn-controller-tls-certs\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916800 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:38 crc kubenswrapper[4990]: I1205 01:30:38.916817 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run-ovn\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018313 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-etc-ovs\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018370 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtbw2\" (UniqueName: 
\"kubernetes.io/projected/d269e431-18be-4f4a-a63f-fee37cf08d46-kube-api-access-mtbw2\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018405 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-combined-ca-bundle\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018439 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-lib\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018523 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-ovn-controller-tls-certs\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018559 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018581 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run-ovn\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018640 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-log\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018656 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d833c1a0-9e88-4ad3-8bcc-5904d459903a-scripts\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018673 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-log-ovn\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.018777 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-run\") pod \"ovn-controller-ovs-2j9fb\" (UID: 
\"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.019241 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-log-ovn\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.019290 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d269e431-18be-4f4a-a63f-fee37cf08d46-scripts\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.019369 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run-ovn\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.019392 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd6rq\" (UniqueName: \"kubernetes.io/projected/d833c1a0-9e88-4ad3-8bcc-5904d459903a-kube-api-access-pd6rq\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.019862 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.024470 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d269e431-18be-4f4a-a63f-fee37cf08d46-scripts\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.030440 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-combined-ca-bundle\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.032898 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-ovn-controller-tls-certs\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.035546 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtbw2\" (UniqueName: \"kubernetes.io/projected/d269e431-18be-4f4a-a63f-fee37cf08d46-kube-api-access-mtbw2\") pod \"ovn-controller-nbpzw\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.120648 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-run\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.120718 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd6rq\" (UniqueName: \"kubernetes.io/projected/d833c1a0-9e88-4ad3-8bcc-5904d459903a-kube-api-access-pd6rq\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.120745 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-etc-ovs\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.120779 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-lib\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.120864 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-log\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.120880 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d833c1a0-9e88-4ad3-8bcc-5904d459903a-scripts\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.122615 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d833c1a0-9e88-4ad3-8bcc-5904d459903a-scripts\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.122754 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-run\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.123085 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-etc-ovs\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.123263 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-lib\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc 
kubenswrapper[4990]: I1205 01:30:39.123384 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-log\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.145660 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd6rq\" (UniqueName: \"kubernetes.io/projected/d833c1a0-9e88-4ad3-8bcc-5904d459903a-kube-api-access-pd6rq\") pod \"ovn-controller-ovs-2j9fb\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.170252 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.221939 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.746647 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.748965 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.751037 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.751105 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.751917 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.752647 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tb75x" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.752780 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.753839 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840186 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840241 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840266 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c137d1b-6433-40ac-8036-84313eef1967-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " 
pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840290 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840308 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840326 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840376 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-config\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.840404 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7v7\" (UniqueName: \"kubernetes.io/projected/7c137d1b-6433-40ac-8036-84313eef1967-kube-api-access-wf7v7\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942231 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7v7\" (UniqueName: \"kubernetes.io/projected/7c137d1b-6433-40ac-8036-84313eef1967-kube-api-access-wf7v7\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942291 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942327 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942358 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c137d1b-6433-40ac-8036-84313eef1967-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942379 4990 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942400 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942416 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942459 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-config\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942884 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c137d1b-6433-40ac-8036-84313eef1967-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.942924 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.943315 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-config\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.943524 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.952140 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.952227 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.954815 4990 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.958199 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7v7\" (UniqueName: \"kubernetes.io/projected/7c137d1b-6433-40ac-8036-84313eef1967-kube-api-access-wf7v7\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:39 crc kubenswrapper[4990]: I1205 01:30:39.964727 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:40 crc kubenswrapper[4990]: I1205 01:30:40.079920 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.135118 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.137412 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.140372 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jbnlp" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.142110 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.142238 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.142858 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.158977 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196334 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196413 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196527 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4475723-8c01-483c-991d-d686c6361021-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196641 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196702 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196738 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.196940 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.197036 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhw4z\" (UniqueName: \"kubernetes.io/projected/f4475723-8c01-483c-991d-d686c6361021-kube-api-access-mhw4z\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.299929 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4475723-8c01-483c-991d-d686c6361021-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.299984 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.300021 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.300048 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.300075 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.300113 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhw4z\" (UniqueName: \"kubernetes.io/projected/f4475723-8c01-483c-991d-d686c6361021-kube-api-access-mhw4z\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.300230 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.300593 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.301016 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.301282 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.301420 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.301508 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4475723-8c01-483c-991d-d686c6361021-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.307251 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.308724 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.313659 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.318242 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhw4z\" (UniqueName: \"kubernetes.io/projected/f4475723-8c01-483c-991d-d686c6361021-kube-api-access-mhw4z\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.343800 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:43 crc kubenswrapper[4990]: I1205 01:30:43.457906 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:48 crc kubenswrapper[4990]: I1205 01:30:48.207999 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:30:49 crc kubenswrapper[4990]: W1205 01:30:49.166582 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda53819fb_5ed6_4c06_8b50_9afd98a4ffb7.slice/crio-cb9b495d9c8a43cb121cc560e4107ab77fc108c9c0086bd6c51a745f8475e299 WatchSource:0}: Error finding container cb9b495d9c8a43cb121cc560e4107ab77fc108c9c0086bd6c51a745f8475e299: Status 404 returned error can't find the container with id cb9b495d9c8a43cb121cc560e4107ab77fc108c9c0086bd6c51a745f8475e299 Dec 05 01:30:49 crc kubenswrapper[4990]: E1205 01:30:49.191058 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 05 01:30:49 crc kubenswrapper[4990]: E1205 01:30:49.191093 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 05 01:30:49 crc kubenswrapper[4990]: E1205 01:30:49.191201 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2vsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-n9x8x_openstack(3370eea3-2944-41fe-8572-1b01dd5f39fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 01:30:49 crc kubenswrapper[4990]: E1205 01:30:49.191265 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8whmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-csbdn_openstack(e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 01:30:49 crc kubenswrapper[4990]: E1205 01:30:49.192264 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" podUID="3370eea3-2944-41fe-8572-1b01dd5f39fa" Dec 05 01:30:49 crc kubenswrapper[4990]: E1205 01:30:49.192408 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" podUID="e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3" Dec 05 01:30:49 crc kubenswrapper[4990]: I1205 01:30:49.751190 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nbpzw"] Dec 05 01:30:49 crc kubenswrapper[4990]: I1205 01:30:49.845615 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 01:30:49 crc kubenswrapper[4990]: I1205 01:30:49.973447 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.042573 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw" event={"ID":"d269e431-18be-4f4a-a63f-fee37cf08d46","Type":"ContainerStarted","Data":"4d7e5f2ffb0fffe4802f5f2db67dd63212cdb0ef605c28834e87bfe10bde9595"} Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.044402 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c281c58-a95e-4669-bdfc-465759817928","Type":"ContainerStarted","Data":"2cf0fa6b65b48acaf9fa9180f44998e45eea4e56bcbf9a49157e844415633e4c"} Dec 05 01:30:50 crc 
kubenswrapper[4990]: I1205 01:30:50.047215 4990 generic.go:334] "Generic (PLEG): container finished" podID="d858e3af-1688-433c-ad50-09aef898edda" containerID="8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77" exitCode=0 Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.047636 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" event={"ID":"d858e3af-1688-433c-ad50-09aef898edda","Type":"ContainerDied","Data":"8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77"} Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.049576 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cd4b299f-9ab6-4714-b911-9b1e11708f39","Type":"ContainerStarted","Data":"1337738b96b97c494ef162bac005232edc2ea2057d25e1bca729e2912f0fc44b"} Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.049741 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.052112 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00beb76a-d4d2-4cd8-bc04-e268c2397388","Type":"ContainerStarted","Data":"eea0a9d2df646aa4abcbe7cfcbbe171d69217dd0f047493b8794dd18a5edc8c6"} Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.055007 4990 generic.go:334] "Generic (PLEG): container finished" podID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerID="a94afd469f5b53e9a5f675fa56dd43328148fe5a340a5b3a4e73d7e4025d4623" exitCode=0 Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.055144 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" event={"ID":"d59839fd-b60f-4cb2-bfb7-166ad40576f2","Type":"ContainerDied","Data":"a94afd469f5b53e9a5f675fa56dd43328148fe5a340a5b3a4e73d7e4025d4623"} Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.057184 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7","Type":"ContainerStarted","Data":"cb9b495d9c8a43cb121cc560e4107ab77fc108c9c0086bd6c51a745f8475e299"} Dec 05 01:30:50 crc kubenswrapper[4990]: W1205 01:30:50.087890 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c137d1b_6433_40ac_8036_84313eef1967.slice/crio-11f37c0f8fa015d1e9a5a18a06886135da7bf7e528eaaa463e30429c118ec287 WatchSource:0}: Error finding container 11f37c0f8fa015d1e9a5a18a06886135da7bf7e528eaaa463e30429c118ec287: Status 404 returned error can't find the container with id 11f37c0f8fa015d1e9a5a18a06886135da7bf7e528eaaa463e30429c118ec287 Dec 05 01:30:50 crc kubenswrapper[4990]: W1205 01:30:50.090343 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4475723_8c01_483c_991d_d686c6361021.slice/crio-2fc8da6d992067e98eea1fc769162eea49a5a05ef7ee6c0e3a06815f331f4f96 WatchSource:0}: Error finding container 2fc8da6d992067e98eea1fc769162eea49a5a05ef7ee6c0e3a06815f331f4f96: Status 404 returned error can't find the container with id 2fc8da6d992067e98eea1fc769162eea49a5a05ef7ee6c0e3a06815f331f4f96 Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.192264 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.041918899 podStartE2EDuration="17.19224308s" podCreationTimestamp="2025-12-05 01:30:33 +0000 UTC" 
firstStartedPulling="2025-12-05 01:30:34.136146748 +0000 UTC m=+1332.512362109" lastFinishedPulling="2025-12-05 01:30:49.286470929 +0000 UTC m=+1347.662686290" observedRunningTime="2025-12-05 01:30:50.186511857 +0000 UTC m=+1348.562727218" watchObservedRunningTime="2025-12-05 01:30:50.19224308 +0000 UTC m=+1348.568458441" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.375908 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2j9fb"] Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.731904 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.742514 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.841862 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-dns-svc\") pod \"3370eea3-2944-41fe-8572-1b01dd5f39fa\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.841915 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-config\") pod \"3370eea3-2944-41fe-8572-1b01dd5f39fa\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.841953 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-config\") pod \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.841983 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2vsf\" (UniqueName: \"kubernetes.io/projected/3370eea3-2944-41fe-8572-1b01dd5f39fa-kube-api-access-d2vsf\") pod \"3370eea3-2944-41fe-8572-1b01dd5f39fa\" (UID: \"3370eea3-2944-41fe-8572-1b01dd5f39fa\") " Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.842043 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8whmd\" (UniqueName: \"kubernetes.io/projected/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-kube-api-access-8whmd\") pod \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\" (UID: \"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3\") " Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.842558 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-config" (OuterVolumeSpecName: "config") pod "e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3" (UID: "e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.842861 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-config" (OuterVolumeSpecName: "config") pod "3370eea3-2944-41fe-8572-1b01dd5f39fa" (UID: "3370eea3-2944-41fe-8572-1b01dd5f39fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.842903 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3370eea3-2944-41fe-8572-1b01dd5f39fa" (UID: "3370eea3-2944-41fe-8572-1b01dd5f39fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.848358 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3370eea3-2944-41fe-8572-1b01dd5f39fa-kube-api-access-d2vsf" (OuterVolumeSpecName: "kube-api-access-d2vsf") pod "3370eea3-2944-41fe-8572-1b01dd5f39fa" (UID: "3370eea3-2944-41fe-8572-1b01dd5f39fa"). InnerVolumeSpecName "kube-api-access-d2vsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.848425 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-kube-api-access-8whmd" (OuterVolumeSpecName: "kube-api-access-8whmd") pod "e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3" (UID: "e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3"). InnerVolumeSpecName "kube-api-access-8whmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.943092 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8whmd\" (UniqueName: \"kubernetes.io/projected/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-kube-api-access-8whmd\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.943125 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.943136 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3370eea3-2944-41fe-8572-1b01dd5f39fa-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.943145 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:50 crc kubenswrapper[4990]: I1205 01:30:50.943154 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2vsf\" (UniqueName: \"kubernetes.io/projected/3370eea3-2944-41fe-8572-1b01dd5f39fa-kube-api-access-d2vsf\") on node \"crc\" DevicePath \"\"" Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.068345 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c137d1b-6433-40ac-8036-84313eef1967","Type":"ContainerStarted","Data":"11f37c0f8fa015d1e9a5a18a06886135da7bf7e528eaaa463e30429c118ec287"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.072523 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ed473a7a-f068-49a3-ae4c-b57b39e33b28","Type":"ContainerStarted","Data":"bf23153d47795f08a1beb8bebe5fd81358e0773080569922f18d9cf836a35d62"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.077181 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"809c1920-3205-411c-a8c1-ed027b7e3b1f","Type":"ContainerStarted","Data":"5f5960287e71d7a833bacd70a7ad0510d80b6d222b74bb7c1aa36b55923710c9"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.079717 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerStarted","Data":"794a157b8e4c29b8bb0adfe2d37dcdc8ecc2ee90d88651978b6c88fe0a5f4f3a"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.080832 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4475723-8c01-483c-991d-d686c6361021","Type":"ContainerStarted","Data":"2fc8da6d992067e98eea1fc769162eea49a5a05ef7ee6c0e3a06815f331f4f96"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.082200 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" event={"ID":"e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3","Type":"ContainerDied","Data":"3a9fcef823f50fd7c860ba781de98dc83743e9b26e0cb10cceeaba86b6469e47"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.082253 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-csbdn" Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.084683 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" event={"ID":"3370eea3-2944-41fe-8572-1b01dd5f39fa","Type":"ContainerDied","Data":"73a96c20a83ba198efe6ba765e44f7c57aaee201b3bfa9450cc6a23796ab8ffa"} Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.084744 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n9x8x" Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.163092 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-csbdn"] Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.169551 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-csbdn"] Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.188004 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n9x8x"] Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.193188 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n9x8x"] Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.943308 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3370eea3-2944-41fe-8572-1b01dd5f39fa" path="/var/lib/kubelet/pods/3370eea3-2944-41fe-8572-1b01dd5f39fa/volumes" Dec 05 01:30:51 crc kubenswrapper[4990]: I1205 01:30:51.944077 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3" path="/var/lib/kubelet/pods/e6b0fdaa-52c8-4f83-8de1-4c6fee175ee3/volumes" Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.097686 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" event={"ID":"d858e3af-1688-433c-ad50-09aef898edda","Type":"ContainerStarted","Data":"c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf"} Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.097947 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.100324 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" event={"ID":"d59839fd-b60f-4cb2-bfb7-166ad40576f2","Type":"ContainerStarted","Data":"93c98f24840aff704e4595afe795207607331811deed7c78170c2569b8c2d970"} Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.100475 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.102589 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7","Type":"ContainerStarted","Data":"45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0"} Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.117081 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" podStartSLOduration=4.144742688 podStartE2EDuration="23.117044314s" podCreationTimestamp="2025-12-05 01:30:29 +0000 UTC" firstStartedPulling="2025-12-05 01:30:30.327962074 +0000 UTC m=+1328.704177435" lastFinishedPulling="2025-12-05 01:30:49.30026366 +0000 UTC m=+1347.676479061" observedRunningTime="2025-12-05 01:30:52.114232994 +0000 UTC m=+1350.490448355" watchObservedRunningTime="2025-12-05 01:30:52.117044314 +0000 UTC m=+1350.493259685" Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.135276 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.6505687 podStartE2EDuration="17.135261771s" podCreationTimestamp="2025-12-05 01:30:35 +0000 UTC" firstStartedPulling="2025-12-05 01:30:49.202945809 +0000 UTC m=+1347.579161170" lastFinishedPulling="2025-12-05 01:30:51.68763888 +0000 UTC m=+1350.063854241" observedRunningTime="2025-12-05 01:30:52.132801921 +0000 UTC m=+1350.509017282" watchObservedRunningTime="2025-12-05 01:30:52.135261771 +0000 UTC m=+1350.511477132" Dec 05 01:30:52 crc kubenswrapper[4990]: I1205 01:30:52.155956 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" podStartSLOduration=3.643058772 podStartE2EDuration="23.155939138s" podCreationTimestamp="2025-12-05 01:30:29 +0000 UTC" firstStartedPulling="2025-12-05 01:30:29.82880359 +0000 UTC m=+1328.205018951" lastFinishedPulling="2025-12-05 01:30:49.341683936 +0000 UTC m=+1347.717899317" observedRunningTime="2025-12-05 01:30:52.15038536 +0000 UTC m=+1350.526600721" watchObservedRunningTime="2025-12-05 01:30:52.155939138 +0000 UTC m=+1350.532154499" Dec 05 01:30:53 crc kubenswrapper[4990]: I1205 01:30:53.109421 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 01:30:54 crc kubenswrapper[4990]: I1205 01:30:54.122553 4990 generic.go:334] "Generic (PLEG): container finished" podID="2c281c58-a95e-4669-bdfc-465759817928" containerID="2cf0fa6b65b48acaf9fa9180f44998e45eea4e56bcbf9a49157e844415633e4c" exitCode=0 Dec 05 01:30:54 crc kubenswrapper[4990]: I1205 01:30:54.123020 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c281c58-a95e-4669-bdfc-465759817928","Type":"ContainerDied","Data":"2cf0fa6b65b48acaf9fa9180f44998e45eea4e56bcbf9a49157e844415633e4c"} Dec 05 01:30:54 crc kubenswrapper[4990]: I1205 01:30:54.132072 4990 generic.go:334] "Generic (PLEG): container finished" podID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerID="eea0a9d2df646aa4abcbe7cfcbbe171d69217dd0f047493b8794dd18a5edc8c6" exitCode=0 Dec 05 01:30:54 
crc kubenswrapper[4990]: I1205 01:30:54.132125 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00beb76a-d4d2-4cd8-bc04-e268c2397388","Type":"ContainerDied","Data":"eea0a9d2df646aa4abcbe7cfcbbe171d69217dd0f047493b8794dd18a5edc8c6"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.142427 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c281c58-a95e-4669-bdfc-465759817928","Type":"ContainerStarted","Data":"d296b3f03577d11daa223c38effcb4c833eb5944cd5256ee247941d30a2772a5"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.146373 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw" event={"ID":"d269e431-18be-4f4a-a63f-fee37cf08d46","Type":"ContainerStarted","Data":"ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.146496 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nbpzw" Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.148342 4990 generic.go:334] "Generic (PLEG): container finished" podID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerID="be6140227381e6af78101bb3622a3525501ee2117419771e4044f5d64097caae" exitCode=0 Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.148386 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerDied","Data":"be6140227381e6af78101bb3622a3525501ee2117419771e4044f5d64097caae"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.150436 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00beb76a-d4d2-4cd8-bc04-e268c2397388","Type":"ContainerStarted","Data":"6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.153176 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4475723-8c01-483c-991d-d686c6361021","Type":"ContainerStarted","Data":"eba3c2e58a11f77331043a8b651d18ec1dbce27ca05b7e23324f354e0f09b319"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.154857 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c137d1b-6433-40ac-8036-84313eef1967","Type":"ContainerStarted","Data":"d669b98ffce9d4d7e245b409d79ed12266001924abaaf1749664c34bb9dcf1d8"} Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.164605 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.674238667 podStartE2EDuration="23.164583766s" podCreationTimestamp="2025-12-05 01:30:32 +0000 UTC" firstStartedPulling="2025-12-05 01:30:33.810605341 +0000 UTC m=+1332.186820702" lastFinishedPulling="2025-12-05 01:30:49.30095044 +0000 UTC m=+1347.677165801" observedRunningTime="2025-12-05 01:30:55.163212087 +0000 UTC m=+1353.539427448" watchObservedRunningTime="2025-12-05 01:30:55.164583766 +0000 UTC m=+1353.540799127" Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.214793 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nbpzw" podStartSLOduration=12.904569722 podStartE2EDuration="17.2147657s" podCreationTimestamp="2025-12-05 01:30:38 +0000 UTC" firstStartedPulling="2025-12-05 01:30:49.766096998 +0000 UTC m=+1348.142312359" 
lastFinishedPulling="2025-12-05 01:30:54.076292976 +0000 UTC m=+1352.452508337" observedRunningTime="2025-12-05 01:30:55.198703154 +0000 UTC m=+1353.574918535" watchObservedRunningTime="2025-12-05 01:30:55.2147657 +0000 UTC m=+1353.590981061" Dec 05 01:30:55 crc kubenswrapper[4990]: I1205 01:30:55.221675 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.472031125 podStartE2EDuration="25.221650085s" podCreationTimestamp="2025-12-05 01:30:30 +0000 UTC" firstStartedPulling="2025-12-05 01:30:32.54746509 +0000 UTC m=+1330.923680451" lastFinishedPulling="2025-12-05 01:30:49.29708405 +0000 UTC m=+1347.673299411" observedRunningTime="2025-12-05 01:30:55.219952217 +0000 UTC m=+1353.596167578" watchObservedRunningTime="2025-12-05 01:30:55.221650085 +0000 UTC m=+1353.597865446" Dec 05 01:30:56 crc kubenswrapper[4990]: I1205 01:30:56.194749 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerStarted","Data":"88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01"} Dec 05 01:30:56 crc kubenswrapper[4990]: I1205 01:30:56.195183 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerStarted","Data":"edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940"} Dec 05 01:30:56 crc kubenswrapper[4990]: I1205 01:30:56.195316 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:56 crc kubenswrapper[4990]: I1205 01:30:56.195336 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:30:56 crc kubenswrapper[4990]: I1205 01:30:56.222727 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2j9fb" podStartSLOduration=14.847963584 podStartE2EDuration="18.22270564s" podCreationTimestamp="2025-12-05 01:30:38 +0000 UTC" firstStartedPulling="2025-12-05 01:30:50.674958616 +0000 UTC m=+1349.051173977" lastFinishedPulling="2025-12-05 01:30:54.049700672 +0000 UTC m=+1352.425916033" observedRunningTime="2025-12-05 01:30:56.218363296 +0000 UTC m=+1354.594578687" watchObservedRunningTime="2025-12-05 01:30:56.22270564 +0000 UTC m=+1354.598921031" Dec 05 01:30:57 crc kubenswrapper[4990]: E1205 01:30:57.801358 4990 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.145:34084->38.102.83.145:35487: read tcp 38.102.83.145:34084->38.102.83.145:35487: read: connection reset by peer Dec 05 01:30:57 crc kubenswrapper[4990]: E1205 01:30:57.870240 4990 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:34100->38.102.83.145:35487: write tcp 38.102.83.145:34100->38.102.83.145:35487: write: broken pipe Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.216526 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4475723-8c01-483c-991d-d686c6361021","Type":"ContainerStarted","Data":"bad858e159bf07ff0d3caac7ac673c248bb0725f3fd9fe9254369591a53861cf"} Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.219329 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"7c137d1b-6433-40ac-8036-84313eef1967","Type":"ContainerStarted","Data":"31641ed3f43c47e4a2506c3dcbee8f9106ba3dd06dc5c1aae7406895b483339c"} Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.255245 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.86539956 podStartE2EDuration="16.2552119s" podCreationTimestamp="2025-12-05 01:30:42 +0000 UTC" firstStartedPulling="2025-12-05 01:30:50.100476556 +0000 UTC m=+1348.476691927" lastFinishedPulling="2025-12-05 01:30:57.490288896 +0000 UTC m=+1355.866504267" observedRunningTime="2025-12-05 01:30:58.247339687 +0000 UTC m=+1356.623555058" watchObservedRunningTime="2025-12-05 01:30:58.2552119 +0000 UTC m=+1356.631427261" Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.287421 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.906318272 podStartE2EDuration="20.287398674s" podCreationTimestamp="2025-12-05 01:30:38 +0000 UTC" firstStartedPulling="2025-12-05 01:30:50.09144462 +0000 UTC m=+1348.467659981" lastFinishedPulling="2025-12-05 01:30:57.472525012 +0000 UTC m=+1355.848740383" observedRunningTime="2025-12-05 01:30:58.280018094 +0000 UTC m=+1356.656233455" watchObservedRunningTime="2025-12-05 01:30:58.287398674 +0000 UTC m=+1356.663614045" Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.458651 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.458720 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.502714 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:58 crc kubenswrapper[4990]: I1205 01:30:58.901612 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.285407 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.528512 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rldr"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.529185 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="dnsmasq-dns" containerID="cri-o://c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf" gracePeriod=10 Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.531674 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.536674 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.581328 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-9vttm"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.582682 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.585034 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.595243 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gch4g"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.596293 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.600730 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.622671 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-9vttm"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.627711 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gch4g"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699185 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pvn\" (UniqueName: \"kubernetes.io/projected/1fab5ec1-e7da-4926-945f-526762ff5aa6-kube-api-access-p7pvn\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699268 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-config\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699381 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699690 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699776 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovn-rundir\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699824 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-combined-ca-bundle\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: 
I1205 01:30:59.699846 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovs-rundir\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699890 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699913 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdggz\" (UniqueName: \"kubernetes.io/projected/92e80556-5f2d-44ed-b165-3211fd50ad98-kube-api-access-gdggz\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.699940 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e80556-5f2d-44ed-b165-3211fd50ad98-config\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801476 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovn-rundir\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801553 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-combined-ca-bundle\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801584 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovs-rundir\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801604 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801620 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdggz\" (UniqueName: \"kubernetes.io/projected/92e80556-5f2d-44ed-b165-3211fd50ad98-kube-api-access-gdggz\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " 
pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801640 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e80556-5f2d-44ed-b165-3211fd50ad98-config\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801678 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7pvn\" (UniqueName: \"kubernetes.io/projected/1fab5ec1-e7da-4926-945f-526762ff5aa6-kube-api-access-p7pvn\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801697 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-config\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801745 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.801783 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.802844 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.802890 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovn-rundir\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.802893 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovs-rundir\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.804742 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e80556-5f2d-44ed-b165-3211fd50ad98-config\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.807336 4990 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.807439 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-config\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.808224 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-combined-ca-bundle\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.808793 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.823604 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7pvn\" (UniqueName: \"kubernetes.io/projected/1fab5ec1-e7da-4926-945f-526762ff5aa6-kube-api-access-p7pvn\") pod \"dnsmasq-dns-7f896c8c65-9vttm\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.828004 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdggz\" (UniqueName: \"kubernetes.io/projected/92e80556-5f2d-44ed-b165-3211fd50ad98-kube-api-access-gdggz\") pod \"ovn-controller-metrics-gch4g\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.853913 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-9vttm"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.854649 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.879380 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dlzsv"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.880590 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.888068 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.900673 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dlzsv"] Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.906570 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-config\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.906635 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xhk\" (UniqueName: \"kubernetes.io/projected/92d6858f-97af-4243-98e9-9322238ca042-kube-api-access-p7xhk\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.906667 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.906763 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.906801 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:30:59 crc kubenswrapper[4990]: I1205 01:30:59.973432 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.003059 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.011289 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-config\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.011339 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7xhk\" (UniqueName: \"kubernetes.io/projected/92d6858f-97af-4243-98e9-9322238ca042-kube-api-access-p7xhk\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.011367 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.011415 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.011443 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.012299 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.012862 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-config\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.013606 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.013999 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 
01:31:00.033851 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7xhk\" (UniqueName: \"kubernetes.io/projected/92d6858f-97af-4243-98e9-9322238ca042-kube-api-access-p7xhk\") pod \"dnsmasq-dns-86db49b7ff-dlzsv\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.081645 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.112674 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-dns-svc\") pod \"d858e3af-1688-433c-ad50-09aef898edda\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.112836 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjljs\" (UniqueName: \"kubernetes.io/projected/d858e3af-1688-433c-ad50-09aef898edda-kube-api-access-gjljs\") pod \"d858e3af-1688-433c-ad50-09aef898edda\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.112895 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-config\") pod \"d858e3af-1688-433c-ad50-09aef898edda\" (UID: \"d858e3af-1688-433c-ad50-09aef898edda\") " Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.116126 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d858e3af-1688-433c-ad50-09aef898edda-kube-api-access-gjljs" (OuterVolumeSpecName: "kube-api-access-gjljs") pod "d858e3af-1688-433c-ad50-09aef898edda" (UID: "d858e3af-1688-433c-ad50-09aef898edda"). InnerVolumeSpecName "kube-api-access-gjljs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.148989 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-config" (OuterVolumeSpecName: "config") pod "d858e3af-1688-433c-ad50-09aef898edda" (UID: "d858e3af-1688-433c-ad50-09aef898edda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.149909 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d858e3af-1688-433c-ad50-09aef898edda" (UID: "d858e3af-1688-433c-ad50-09aef898edda"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.214651 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjljs\" (UniqueName: \"kubernetes.io/projected/d858e3af-1688-433c-ad50-09aef898edda-kube-api-access-gjljs\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.214674 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.214684 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d858e3af-1688-433c-ad50-09aef898edda-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.237554 4990 generic.go:334] "Generic (PLEG): container finished" podID="d858e3af-1688-433c-ad50-09aef898edda" containerID="c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf" exitCode=0 Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.238226 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" event={"ID":"d858e3af-1688-433c-ad50-09aef898edda","Type":"ContainerDied","Data":"c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf"} Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.238270 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" event={"ID":"d858e3af-1688-433c-ad50-09aef898edda","Type":"ContainerDied","Data":"f37fcc88a7dc384d40166777177b274125cda284fe438002f822ef5abc4d83cd"} Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.238288 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.238341 4990 scope.go:117] "RemoveContainer" containerID="c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.267061 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.294769 4990 scope.go:117] "RemoveContainer" containerID="8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.348415 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rldr"] Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.366071 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rldr"] Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.390268 4990 scope.go:117] "RemoveContainer" containerID="c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf" Dec 05 01:31:00 crc kubenswrapper[4990]: E1205 01:31:00.396199 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf\": container with ID starting with c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf not found: ID does not exist" containerID="c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.396429 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf"} err="failed to get container status \"c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf\": rpc error: code = NotFound desc = could not find container \"c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf\": container with ID starting with c4b92a353200df6147ebf085b053f8ff1d30eaca827b511ab86b33a42bce83cf not found: ID does not exist" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.396580 4990 scope.go:117] "RemoveContainer" containerID="8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.396597 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-9vttm"] Dec 05 01:31:00 crc kubenswrapper[4990]: E1205 01:31:00.397121 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77\": container with ID starting with 8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77 not found: ID does not exist" containerID="8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77" Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.397167 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77"} err="failed to get container status \"8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77\": rpc error: code = NotFound desc = could not find container \"8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77\": container with ID starting with 8be3e28f3162c551353b7a0b7e95263f298da1bd646be5e870b2780bd7bebd77 not found: ID does not exist" Dec 05 01:31:00 crc kubenswrapper[4990]: W1205 01:31:00.403394 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fab5ec1_e7da_4926_945f_526762ff5aa6.slice/crio-a1ab25ccb1a3a9d8857948412f00c48f7440b730e733a16419c644fe946b2a14 WatchSource:0}: Error 
finding container a1ab25ccb1a3a9d8857948412f00c48f7440b730e733a16419c644fe946b2a14: Status 404 returned error can't find the container with id a1ab25ccb1a3a9d8857948412f00c48f7440b730e733a16419c644fe946b2a14 Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.446071 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gch4g"] Dec 05 01:31:00 crc kubenswrapper[4990]: I1205 01:31:00.746924 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dlzsv"] Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.080976 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.145864 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.253988 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" event={"ID":"92d6858f-97af-4243-98e9-9322238ca042","Type":"ContainerStarted","Data":"daadd366def0e5e06427e8be39162a4c5501f368c9e441d86f433143b18e487a"} Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.258065 4990 generic.go:334] "Generic (PLEG): container finished" podID="1fab5ec1-e7da-4926-945f-526762ff5aa6" containerID="4ada56f07ceaf14c08d537d78919fd0d3359af0c6fea588884097bbeddb7ce81" exitCode=0 Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.258129 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" event={"ID":"1fab5ec1-e7da-4926-945f-526762ff5aa6","Type":"ContainerDied","Data":"4ada56f07ceaf14c08d537d78919fd0d3359af0c6fea588884097bbeddb7ce81"} Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.258153 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" event={"ID":"1fab5ec1-e7da-4926-945f-526762ff5aa6","Type":"ContainerStarted","Data":"a1ab25ccb1a3a9d8857948412f00c48f7440b730e733a16419c644fe946b2a14"} Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.262204 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gch4g" event={"ID":"92e80556-5f2d-44ed-b165-3211fd50ad98","Type":"ContainerStarted","Data":"a7a11082a66fecab87b3288c6d11916a97e6105c2ce7e93782df7d9596c3b900"} Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.311440 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.451358 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 05 01:31:01 crc kubenswrapper[4990]: E1205 01:31:01.451770 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="dnsmasq-dns" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.451795 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="dnsmasq-dns" Dec 05 01:31:01 crc kubenswrapper[4990]: E1205 01:31:01.451805 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="init" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.451815 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="init" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.452002 4990 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="dnsmasq-dns" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.452976 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.457662 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.458051 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-v855j" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.467412 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.467777 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.469095 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.525693 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.544633 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-scripts\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.544676 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7xs\" (UniqueName: \"kubernetes.io/projected/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-kube-api-access-lt7xs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.544695 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.544738 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.545048 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.545079 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.545106 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-config\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.646520 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-dns-svc\") pod \"1fab5ec1-e7da-4926-945f-526762ff5aa6\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.646641 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-config\") pod \"1fab5ec1-e7da-4926-945f-526762ff5aa6\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.646683 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-ovsdbserver-sb\") pod \"1fab5ec1-e7da-4926-945f-526762ff5aa6\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.646720 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7pvn\" (UniqueName: \"kubernetes.io/projected/1fab5ec1-e7da-4926-945f-526762ff5aa6-kube-api-access-p7pvn\") pod \"1fab5ec1-e7da-4926-945f-526762ff5aa6\" (UID: \"1fab5ec1-e7da-4926-945f-526762ff5aa6\") " Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.647188 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.648123 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.648197 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.648251 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-config\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.648425 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-scripts\") pod \"ovn-northd-0\" (UID: 
\"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.648464 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt7xs\" (UniqueName: \"kubernetes.io/projected/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-kube-api-access-lt7xs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.648509 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.649598 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.650393 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-scripts\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.651308 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fab5ec1-e7da-4926-945f-526762ff5aa6-kube-api-access-p7pvn" (OuterVolumeSpecName: "kube-api-access-p7pvn") pod "1fab5ec1-e7da-4926-945f-526762ff5aa6" (UID: "1fab5ec1-e7da-4926-945f-526762ff5aa6"). InnerVolumeSpecName "kube-api-access-p7pvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.651583 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-config\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.652290 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.654561 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.656341 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.665304 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt7xs\" (UniqueName: \"kubernetes.io/projected/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-kube-api-access-lt7xs\") pod \"ovn-northd-0\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.668230 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fab5ec1-e7da-4926-945f-526762ff5aa6" (UID: "1fab5ec1-e7da-4926-945f-526762ff5aa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.668425 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-config" (OuterVolumeSpecName: "config") pod "1fab5ec1-e7da-4926-945f-526762ff5aa6" (UID: "1fab5ec1-e7da-4926-945f-526762ff5aa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.677233 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fab5ec1-e7da-4926-945f-526762ff5aa6" (UID: "1fab5ec1-e7da-4926-945f-526762ff5aa6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.750369 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.750400 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.750411 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fab5ec1-e7da-4926-945f-526762ff5aa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.750422 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7pvn\" (UniqueName: \"kubernetes.io/projected/1fab5ec1-e7da-4926-945f-526762ff5aa6-kube-api-access-p7pvn\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.785609 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 01:31:01 crc kubenswrapper[4990]: I1205 01:31:01.943111 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d858e3af-1688-433c-ad50-09aef898edda" path="/var/lib/kubelet/pods/d858e3af-1688-433c-ad50-09aef898edda/volumes" Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.078551 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.078640 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.238597 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.272548 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-9vttm" event={"ID":"1fab5ec1-e7da-4926-945f-526762ff5aa6","Type":"ContainerDied","Data":"a1ab25ccb1a3a9d8857948412f00c48f7440b730e733a16419c644fe946b2a14"} Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.272633 4990 scope.go:117] "RemoveContainer" containerID="4ada56f07ceaf14c08d537d78919fd0d3359af0c6fea588884097bbeddb7ce81" Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.272874 4990 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.274440 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454","Type":"ContainerStarted","Data":"94f2c6137a1248cd5c311d43aa64ecf777baa66c13d77a820ff3f8866c28f34e"}
Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.343188 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-9vttm"]
Dec 05 01:31:02 crc kubenswrapper[4990]: I1205 01:31:02.354804 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-9vttm"]
Dec 05 01:31:03 crc kubenswrapper[4990]: I1205 01:31:03.284585 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gch4g" event={"ID":"92e80556-5f2d-44ed-b165-3211fd50ad98","Type":"ContainerStarted","Data":"c2e8be780f912a39bffb8d62d08b624b97b1904a55d0be61a24accc3f6874ef6"}
Dec 05 01:31:03 crc kubenswrapper[4990]: I1205 01:31:03.478653 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 05 01:31:03 crc kubenswrapper[4990]: I1205 01:31:03.478701 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 05 01:31:03 crc kubenswrapper[4990]: I1205 01:31:03.601612 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 05 01:31:03 crc kubenswrapper[4990]: I1205 01:31:03.951514 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fab5ec1-e7da-4926-945f-526762ff5aa6" path="/var/lib/kubelet/pods/1fab5ec1-e7da-4926-945f-526762ff5aa6/volumes"
Dec 05 01:31:04 crc kubenswrapper[4990]: I1205 01:31:04.386228 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 05 01:31:04 crc kubenswrapper[4990]: I1205 01:31:04.825990 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-7rldr" podUID="d858e3af-1688-433c-ad50-09aef898edda" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: i/o timeout"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.813950 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dlzsv"]
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.849636 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.867612 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-gml76"]
Dec 05 01:31:05 crc kubenswrapper[4990]: E1205 01:31:05.867976 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fab5ec1-e7da-4926-945f-526762ff5aa6" containerName="init"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.867999 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fab5ec1-e7da-4926-945f-526762ff5aa6" containerName="init"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.868200 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fab5ec1-e7da-4926-945f-526762ff5aa6" containerName="init"
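The dnsmasq readiness failure at 01:31:04.825990 is a plain TCP connect timing out. A sketch of the same kind of check, assuming a tcpSocket-style probe (the dial error in the log suggests one; the address is from the log line, the one-second timeout is chosen for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

// tcpReady does what a tcpSocket readiness probe effectively does: try to
// open a TCP connection within the timeout, and report the error otherwise.
func tcpReady(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.99:5353: i/o timeout"
	}
	return conn.Close()
}

func main() {
	if err := tcpReady("10.217.0.99:5353", time.Second); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("probe ok")
}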
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.869161 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.894831 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gml76"]
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.920536 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-dns-svc\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.920577 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.920639 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nct\" (UniqueName: \"kubernetes.io/projected/513896a8-02b3-417f-95a4-7ec45b07b61a-kube-api-access-d7nct\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.920656 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:05 crc kubenswrapper[4990]: I1205 01:31:05.920678 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-config\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.021813 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-dns-svc\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.021856 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.021944 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7nct\" (UniqueName: \"kubernetes.io/projected/513896a8-02b3-417f-95a4-7ec45b07b61a-kube-api-access-d7nct\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.021961 4990 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.021984 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-config\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.022665 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-dns-svc\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.022807 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-config\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.022911 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.023237 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.040433 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7nct\" (UniqueName: \"kubernetes.io/projected/513896a8-02b3-417f-95a4-7ec45b07b61a-kube-api-access-d7nct\") pod \"dnsmasq-dns-698758b865-gml76\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") " pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.184604 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.664787 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gml76"] Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.948717 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.956288 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.958633 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.959340 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.959408 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9nj2l" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.960216 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 05 01:31:06 crc kubenswrapper[4990]: I1205 01:31:06.979993 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.139988 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bj89\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-kube-api-access-8bj89\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.140102 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-cache\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.140184 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.140257 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.140281 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-lock\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.242100 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.242500 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.242533 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-lock\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.243004 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bj89\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-kube-api-access-8bj89\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.243075 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-cache\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.243123 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.243263 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-lock\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: E1205 01:31:07.243336 4990 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 01:31:07 crc kubenswrapper[4990]: E1205 01:31:07.243359 4990 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 01:31:07 crc kubenswrapper[4990]: E1205 01:31:07.243411 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift podName:c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3 nodeName:}" failed. No retries permitted until 2025-12-05 01:31:07.743391264 +0000 UTC m=+1366.119606635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift") pod "swift-storage-0" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3") : configmap "swift-ring-files" not found
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.243628 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-cache\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.267798 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bj89\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-kube-api-access-8bj89\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.281700 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.334547 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" event={"ID":"92d6858f-97af-4243-98e9-9322238ca042","Type":"ContainerStarted","Data":"158ae37d6f3982ba5d59cd75970abe6b8cd22dc9cc6fc60e81f8b4e34e9db6cb"}
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.335930 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gml76" event={"ID":"513896a8-02b3-417f-95a4-7ec45b07b61a","Type":"ContainerStarted","Data":"bf3a21ac9a4285c6e5a0edfea71759ca0368019f2b9f8d256bdacb5ea26f3d7b"}
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.428277 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-snp9x"]
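swift-storage-0's etc-swift volume keeps failing to mount because it projects the swift-ring-files ConfigMap, which the swift-ring-rebalance job added just above has not yet published. A projected ConfigMap source blocks pod start until the ConfigMap exists unless it is marked optional; a sketch of the volume shape involved, built from the k8s.io/api/core/v1 types (the Optional parameter is shown for contrast, and the operator evidently leaves it at the default false and lets kubelet retry):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// etcSwiftVolume approximates the "etc-swift" projected volume from the log:
// a projection of the swift-ring-files ConfigMap into the pod.
func etcSwiftVolume(optional bool) corev1.Volume {
	return corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
						// With Optional=false (the default), MountVolume.SetUp fails
						// until the ConfigMap exists -- the retry loop seen here.
						Optional: &optional,
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println(etcSwiftVolume(false).Name)
}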
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.429912 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.432121 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.432625 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.432784 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.445203 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-snp9x"]
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.447856 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-combined-ca-bundle\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.447962 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bd9k\" (UniqueName: \"kubernetes.io/projected/59d6536f-e0ed-42d2-9676-40bc88de1473-kube-api-access-7bd9k\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.448035 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-dispersionconf\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.448148 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-swiftconf\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.448227 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59d6536f-e0ed-42d2-9676-40bc88de1473-etc-swift\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.448259 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-ring-data-devices\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.448292 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-scripts\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x"
Dec 05
01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549465 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-ring-data-devices\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549521 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-scripts\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549594 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-combined-ca-bundle\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549621 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bd9k\" (UniqueName: \"kubernetes.io/projected/59d6536f-e0ed-42d2-9676-40bc88de1473-kube-api-access-7bd9k\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549646 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-dispersionconf\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549681 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-swiftconf\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.549701 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59d6536f-e0ed-42d2-9676-40bc88de1473-etc-swift\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.550378 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-scripts\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.551195 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59d6536f-e0ed-42d2-9676-40bc88de1473-etc-swift\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.553223 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-ring-data-devices\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.553664 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-swiftconf\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.554032 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-combined-ca-bundle\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.556393 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-dispersionconf\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.571274 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bd9k\" (UniqueName: \"kubernetes.io/projected/59d6536f-e0ed-42d2-9676-40bc88de1473-kube-api-access-7bd9k\") pod \"swift-ring-rebalance-snp9x\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.750625 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:07 crc kubenswrapper[4990]: I1205 01:31:07.753023 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:07 crc kubenswrapper[4990]: E1205 01:31:07.753296 4990 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 01:31:07 crc kubenswrapper[4990]: E1205 01:31:07.753319 4990 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 01:31:07 crc kubenswrapper[4990]: E1205 01:31:07.753363 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift podName:c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3 nodeName:}" failed. No retries permitted until 2025-12-05 01:31:08.753347074 +0000 UTC m=+1367.129562445 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift") pod "swift-storage-0" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3") : configmap "swift-ring-files" not found Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.154323 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.226659 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 05 01:31:08 crc kubenswrapper[4990]: W1205 01:31:08.241899 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59d6536f_e0ed_42d2_9676_40bc88de1473.slice/crio-f82de9178fb93c4a32c6359d093ff65e3c56174c69098fdd95ce566603fbde19 WatchSource:0}: Error finding container f82de9178fb93c4a32c6359d093ff65e3c56174c69098fdd95ce566603fbde19: Status 404 returned error can't find the container with id f82de9178fb93c4a32c6359d093ff65e3c56174c69098fdd95ce566603fbde19 Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.243423 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-snp9x"] Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.344763 4990 generic.go:334] "Generic (PLEG): container finished" podID="92d6858f-97af-4243-98e9-9322238ca042" containerID="158ae37d6f3982ba5d59cd75970abe6b8cd22dc9cc6fc60e81f8b4e34e9db6cb" exitCode=0 Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.344874 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" event={"ID":"92d6858f-97af-4243-98e9-9322238ca042","Type":"ContainerDied","Data":"158ae37d6f3982ba5d59cd75970abe6b8cd22dc9cc6fc60e81f8b4e34e9db6cb"} Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.347608 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snp9x" event={"ID":"59d6536f-e0ed-42d2-9676-40bc88de1473","Type":"ContainerStarted","Data":"f82de9178fb93c4a32c6359d093ff65e3c56174c69098fdd95ce566603fbde19"} Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.351571 4990 generic.go:334] "Generic (PLEG): container finished" podID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerID="3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b" exitCode=0 Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.352425 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gml76" event={"ID":"513896a8-02b3-417f-95a4-7ec45b07b61a","Type":"ContainerDied","Data":"3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b"} Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.388375 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gch4g" podStartSLOduration=9.388354821 podStartE2EDuration="9.388354821s" podCreationTimestamp="2025-12-05 01:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:08.386037356 +0000 UTC m=+1366.762252717" watchObservedRunningTime="2025-12-05 01:31:08.388354821 +0000 UTC m=+1366.764570182" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.647638 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.800207 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-dns-svc\") pod \"92d6858f-97af-4243-98e9-9322238ca042\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.800247 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-sb\") pod \"92d6858f-97af-4243-98e9-9322238ca042\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.800310 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-config\") pod \"92d6858f-97af-4243-98e9-9322238ca042\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.800614 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7xhk\" (UniqueName: \"kubernetes.io/projected/92d6858f-97af-4243-98e9-9322238ca042-kube-api-access-p7xhk\") pod \"92d6858f-97af-4243-98e9-9322238ca042\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.802947 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-nb\") pod \"92d6858f-97af-4243-98e9-9322238ca042\" (UID: \"92d6858f-97af-4243-98e9-9322238ca042\") " Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.803351 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:08 crc kubenswrapper[4990]: E1205 01:31:08.803679 4990 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 01:31:08 crc kubenswrapper[4990]: E1205 01:31:08.803698 4990 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 01:31:08 crc kubenswrapper[4990]: E1205 01:31:08.803740 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift podName:c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3 nodeName:}" failed. No retries permitted until 2025-12-05 01:31:10.803727277 +0000 UTC m=+1369.179942638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift") pod "swift-storage-0" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3") : configmap "swift-ring-files" not found Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.805686 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d6858f-97af-4243-98e9-9322238ca042-kube-api-access-p7xhk" (OuterVolumeSpecName: "kube-api-access-p7xhk") pod "92d6858f-97af-4243-98e9-9322238ca042" (UID: "92d6858f-97af-4243-98e9-9322238ca042"). 
InnerVolumeSpecName "kube-api-access-p7xhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.824251 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92d6858f-97af-4243-98e9-9322238ca042" (UID: "92d6858f-97af-4243-98e9-9322238ca042"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.826667 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-config" (OuterVolumeSpecName: "config") pod "92d6858f-97af-4243-98e9-9322238ca042" (UID: "92d6858f-97af-4243-98e9-9322238ca042"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.827694 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92d6858f-97af-4243-98e9-9322238ca042" (UID: "92d6858f-97af-4243-98e9-9322238ca042"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.834641 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92d6858f-97af-4243-98e9-9322238ca042" (UID: "92d6858f-97af-4243-98e9-9322238ca042"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.905802 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.905844 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7xhk\" (UniqueName: \"kubernetes.io/projected/92d6858f-97af-4243-98e9-9322238ca042-kube-api-access-p7xhk\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.905860 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.905873 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:08 crc kubenswrapper[4990]: I1205 01:31:08.905885 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d6858f-97af-4243-98e9-9322238ca042-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.159919 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bd6b-account-create-update-2mpnv"] Dec 05 01:31:09 crc kubenswrapper[4990]: E1205 01:31:09.160416 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d6858f-97af-4243-98e9-9322238ca042" containerName="init" Dec 05 01:31:09 crc kubenswrapper[4990]: 
I1205 01:31:09.160451 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d6858f-97af-4243-98e9-9322238ca042" containerName="init" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.160703 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d6858f-97af-4243-98e9-9322238ca042" containerName="init" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.163722 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.166232 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.170668 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rpg97"] Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.172293 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.187017 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rpg97"] Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.198417 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bd6b-account-create-update-2mpnv"] Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.208896 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc4hz\" (UniqueName: \"kubernetes.io/projected/31b24fcb-1091-4e53-95d0-b1133f1a9b92-kube-api-access-bc4hz\") pod \"glance-bd6b-account-create-update-2mpnv\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.208989 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b24fcb-1091-4e53-95d0-b1133f1a9b92-operator-scripts\") pod \"glance-bd6b-account-create-update-2mpnv\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.209031 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/f690712b-647d-455f-af3b-adbe25e2662d-kube-api-access-vmvq7\") pod \"glance-db-create-rpg97\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.209086 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f690712b-647d-455f-af3b-adbe25e2662d-operator-scripts\") pod \"glance-db-create-rpg97\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.310163 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f690712b-647d-455f-af3b-adbe25e2662d-operator-scripts\") pod \"glance-db-create-rpg97\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.310590 4990 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bc4hz\" (UniqueName: \"kubernetes.io/projected/31b24fcb-1091-4e53-95d0-b1133f1a9b92-kube-api-access-bc4hz\") pod \"glance-bd6b-account-create-update-2mpnv\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.310634 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b24fcb-1091-4e53-95d0-b1133f1a9b92-operator-scripts\") pod \"glance-bd6b-account-create-update-2mpnv\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.310669 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/f690712b-647d-455f-af3b-adbe25e2662d-kube-api-access-vmvq7\") pod \"glance-db-create-rpg97\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.311674 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f690712b-647d-455f-af3b-adbe25e2662d-operator-scripts\") pod \"glance-db-create-rpg97\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.311679 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b24fcb-1091-4e53-95d0-b1133f1a9b92-operator-scripts\") pod \"glance-bd6b-account-create-update-2mpnv\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.330979 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/f690712b-647d-455f-af3b-adbe25e2662d-kube-api-access-vmvq7\") pod \"glance-db-create-rpg97\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.332908 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc4hz\" (UniqueName: \"kubernetes.io/projected/31b24fcb-1091-4e53-95d0-b1133f1a9b92-kube-api-access-bc4hz\") pod \"glance-bd6b-account-create-update-2mpnv\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.363875 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gml76" event={"ID":"513896a8-02b3-417f-95a4-7ec45b07b61a","Type":"ContainerStarted","Data":"81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1"} Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.364142 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.366153 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" event={"ID":"92d6858f-97af-4243-98e9-9322238ca042","Type":"ContainerDied","Data":"daadd366def0e5e06427e8be39162a4c5501f368c9e441d86f433143b18e487a"} Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.366191 
4990 scope.go:117] "RemoveContainer" containerID="158ae37d6f3982ba5d59cd75970abe6b8cd22dc9cc6fc60e81f8b4e34e9db6cb" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.366306 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dlzsv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.385118 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-gml76" podStartSLOduration=4.385101334 podStartE2EDuration="4.385101334s" podCreationTimestamp="2025-12-05 01:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:09.380100492 +0000 UTC m=+1367.756315853" watchObservedRunningTime="2025-12-05 01:31:09.385101334 +0000 UTC m=+1367.761316695" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.452729 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dlzsv"] Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.458461 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dlzsv"] Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.489878 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.503622 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpg97" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.949329 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d6858f-97af-4243-98e9-9322238ca042" path="/var/lib/kubelet/pods/92d6858f-97af-4243-98e9-9322238ca042/volumes" Dec 05 01:31:09 crc kubenswrapper[4990]: I1205 01:31:09.951289 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rpg97"] Dec 05 01:31:10 crc kubenswrapper[4990]: I1205 01:31:10.008849 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bd6b-account-create-update-2mpnv"] Dec 05 01:31:10 crc kubenswrapper[4990]: I1205 01:31:10.376837 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454","Type":"ContainerStarted","Data":"77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c"} Dec 05 01:31:10 crc kubenswrapper[4990]: W1205 01:31:10.454311 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf690712b_647d_455f_af3b_adbe25e2662d.slice/crio-4481785d7baf930a04a81de44ec9906b922b0b07b9c6f7123c5201962c9ad6d1 WatchSource:0}: Error finding container 4481785d7baf930a04a81de44ec9906b922b0b07b9c6f7123c5201962c9ad6d1: Status 404 returned error can't find the container with id 4481785d7baf930a04a81de44ec9906b922b0b07b9c6f7123c5201962c9ad6d1 Dec 05 01:31:10 crc kubenswrapper[4990]: I1205 01:31:10.837142 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:10 crc kubenswrapper[4990]: E1205 01:31:10.837506 4990 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 01:31:10 crc 
kubenswrapper[4990]: E1205 01:31:10.837547 4990 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 05 01:31:10 crc kubenswrapper[4990]: E1205 01:31:10.837594 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift podName:c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3 nodeName:}" failed. No retries permitted until 2025-12-05 01:31:14.837579467 +0000 UTC m=+1373.213794828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift") pod "swift-storage-0" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3") : configmap "swift-ring-files" not found
Dec 05 01:31:11 crc kubenswrapper[4990]: W1205 01:31:11.298874 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31b24fcb_1091_4e53_95d0_b1133f1a9b92.slice/crio-1f6149e84c48b482577995efb57273ce873459746741cca2a62fcd8c5ae6ff71 WatchSource:0}: Error finding container 1f6149e84c48b482577995efb57273ce873459746741cca2a62fcd8c5ae6ff71: Status 404 returned error can't find the container with id 1f6149e84c48b482577995efb57273ce873459746741cca2a62fcd8c5ae6ff71
Dec 05 01:31:11 crc kubenswrapper[4990]: I1205 01:31:11.390080 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rpg97" event={"ID":"f690712b-647d-455f-af3b-adbe25e2662d","Type":"ContainerStarted","Data":"4481785d7baf930a04a81de44ec9906b922b0b07b9c6f7123c5201962c9ad6d1"}
Dec 05 01:31:11 crc kubenswrapper[4990]: I1205 01:31:11.392305 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd6b-account-create-update-2mpnv" event={"ID":"31b24fcb-1091-4e53-95d0-b1133f1a9b92","Type":"ContainerStarted","Data":"1f6149e84c48b482577995efb57273ce873459746741cca2a62fcd8c5ae6ff71"}
Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.404160 4990 generic.go:334] "Generic (PLEG): container finished" podID="31b24fcb-1091-4e53-95d0-b1133f1a9b92" containerID="4ac1fae93e9ab2518d2e7ad50e12a39ab7f54659cf30e784b78090d0efc59346" exitCode=0
Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.404259 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd6b-account-create-update-2mpnv" event={"ID":"31b24fcb-1091-4e53-95d0-b1133f1a9b92","Type":"ContainerDied","Data":"4ac1fae93e9ab2518d2e7ad50e12a39ab7f54659cf30e784b78090d0efc59346"}
Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.406384 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snp9x" event={"ID":"59d6536f-e0ed-42d2-9676-40bc88de1473","Type":"ContainerStarted","Data":"4ab656dfc2d01202160cd720237d9949e5273fcbe6945b48b1b1bc3f7a17c02a"}
Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.410347 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454","Type":"ContainerStarted","Data":"0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8"}
Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.410687 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
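Taken together with the earlier attempts, the etc-swift retries back off at 500ms (01:31:07), 1s (01:31:08), 2s (01:31:08.8), and now 4s: kubelet's nestedpendingoperations doubles durationBeforeRetry after each failure. A sketch of that schedule; the 2m2s cap is an assumption based on kubelet's defaults, not something visible in this log:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule doubles the delay after every failed attempt, starting at
// initial and clamping at max -- the durationBeforeRetry pattern in the
// etc-swift entries (500ms, 1s, 2s, 4s, ...).
func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	schedule := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		schedule = append(schedule, d)
		if d *= 2; d > max {
			d = max
		}
	}
	return schedule
}

func main() {
	// Assumed values: 500ms initial, 2m2s cap.
	fmt.Println(backoffSchedule(500*time.Millisecond, 2*time.Minute+2*time.Second, 8))
	// [500ms 1s 2s 4s 8s 16s 32s 1m4s]
}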
containerID="50f546f38d10fde078eb863526aabded8f57ec98bea69e0072993fcbb5df0aa5" exitCode=0 Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.413186 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rpg97" event={"ID":"f690712b-647d-455f-af3b-adbe25e2662d","Type":"ContainerDied","Data":"50f546f38d10fde078eb863526aabded8f57ec98bea69e0072993fcbb5df0aa5"} Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.499511 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-snp9x" podStartSLOduration=2.3442640040000002 podStartE2EDuration="5.4994108s" podCreationTimestamp="2025-12-05 01:31:07 +0000 UTC" firstStartedPulling="2025-12-05 01:31:08.244245272 +0000 UTC m=+1366.620460633" lastFinishedPulling="2025-12-05 01:31:11.399392078 +0000 UTC m=+1369.775607429" observedRunningTime="2025-12-05 01:31:12.487236284 +0000 UTC m=+1370.863451675" watchObservedRunningTime="2025-12-05 01:31:12.4994108 +0000 UTC m=+1370.875626201" Dec 05 01:31:12 crc kubenswrapper[4990]: I1205 01:31:12.536661 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.53820358 podStartE2EDuration="11.536627906s" podCreationTimestamp="2025-12-05 01:31:01 +0000 UTC" firstStartedPulling="2025-12-05 01:31:02.248292221 +0000 UTC m=+1360.624507582" lastFinishedPulling="2025-12-05 01:31:09.246716547 +0000 UTC m=+1367.622931908" observedRunningTime="2025-12-05 01:31:12.523451262 +0000 UTC m=+1370.899666613" watchObservedRunningTime="2025-12-05 01:31:12.536627906 +0000 UTC m=+1370.912843267" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.504734 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kzhdt"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.506230 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.524509 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kzhdt"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.583017 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6a81-account-create-update-xj7vd"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.592574 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.601205 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.619206 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6a81-account-create-update-xj7vd"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.627097 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-operator-scripts\") pod \"keystone-db-create-kzhdt\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.627352 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrnm\" (UniqueName: \"kubernetes.io/projected/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-kube-api-access-kzrnm\") pod \"keystone-db-create-kzhdt\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.729807 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62n8\" (UniqueName: \"kubernetes.io/projected/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-kube-api-access-n62n8\") pod \"keystone-6a81-account-create-update-xj7vd\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.730012 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-operator-scripts\") pod \"keystone-db-create-kzhdt\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.730082 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-operator-scripts\") pod \"keystone-6a81-account-create-update-xj7vd\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.730268 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrnm\" (UniqueName: \"kubernetes.io/projected/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-kube-api-access-kzrnm\") pod \"keystone-db-create-kzhdt\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.731346 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-operator-scripts\") pod \"keystone-db-create-kzhdt\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.772531 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrnm\" (UniqueName: \"kubernetes.io/projected/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-kube-api-access-kzrnm\") pod 
\"keystone-db-create-kzhdt\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.785385 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8d55n"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.786849 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.791326 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8d55n"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.824458 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpg97" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.831648 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-operator-scripts\") pod \"keystone-6a81-account-create-update-xj7vd\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.831756 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87a54837-9e3c-4d46-a016-c53e3576aad3-operator-scripts\") pod \"placement-db-create-8d55n\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.831790 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62n8\" (UniqueName: \"kubernetes.io/projected/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-kube-api-access-n62n8\") pod \"keystone-6a81-account-create-update-xj7vd\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.831811 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf6vj\" (UniqueName: \"kubernetes.io/projected/87a54837-9e3c-4d46-a016-c53e3576aad3-kube-api-access-hf6vj\") pod \"placement-db-create-8d55n\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.832582 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-operator-scripts\") pod \"keystone-6a81-account-create-update-xj7vd\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.849878 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.857589 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62n8\" (UniqueName: \"kubernetes.io/projected/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-kube-api-access-n62n8\") pod \"keystone-6a81-account-create-update-xj7vd\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.895349 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-06fa-account-create-update-tjxv8"] Dec 05 01:31:13 crc kubenswrapper[4990]: E1205 01:31:13.895764 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f690712b-647d-455f-af3b-adbe25e2662d" containerName="mariadb-database-create" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.895782 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f690712b-647d-455f-af3b-adbe25e2662d" containerName="mariadb-database-create" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.895971 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f690712b-647d-455f-af3b-adbe25e2662d" containerName="mariadb-database-create" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.897028 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.899405 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.905263 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-06fa-account-create-update-tjxv8"] Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.918536 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.932790 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/f690712b-647d-455f-af3b-adbe25e2662d-kube-api-access-vmvq7\") pod \"f690712b-647d-455f-af3b-adbe25e2662d\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.932858 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f690712b-647d-455f-af3b-adbe25e2662d-operator-scripts\") pod \"f690712b-647d-455f-af3b-adbe25e2662d\" (UID: \"f690712b-647d-455f-af3b-adbe25e2662d\") " Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.933401 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87a54837-9e3c-4d46-a016-c53e3576aad3-operator-scripts\") pod \"placement-db-create-8d55n\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.933477 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf6vj\" (UniqueName: \"kubernetes.io/projected/87a54837-9e3c-4d46-a016-c53e3576aad3-kube-api-access-hf6vj\") pod \"placement-db-create-8d55n\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.933813 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f690712b-647d-455f-af3b-adbe25e2662d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f690712b-647d-455f-af3b-adbe25e2662d" (UID: "f690712b-647d-455f-af3b-adbe25e2662d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.934200 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87a54837-9e3c-4d46-a016-c53e3576aad3-operator-scripts\") pod \"placement-db-create-8d55n\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.937129 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f690712b-647d-455f-af3b-adbe25e2662d-kube-api-access-vmvq7" (OuterVolumeSpecName: "kube-api-access-vmvq7") pod "f690712b-647d-455f-af3b-adbe25e2662d" (UID: "f690712b-647d-455f-af3b-adbe25e2662d"). InnerVolumeSpecName "kube-api-access-vmvq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.950078 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf6vj\" (UniqueName: \"kubernetes.io/projected/87a54837-9e3c-4d46-a016-c53e3576aad3-kube-api-access-hf6vj\") pod \"placement-db-create-8d55n\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " pod="openstack/placement-db-create-8d55n" Dec 05 01:31:13 crc kubenswrapper[4990]: I1205 01:31:13.954566 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.034283 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc4hz\" (UniqueName: \"kubernetes.io/projected/31b24fcb-1091-4e53-95d0-b1133f1a9b92-kube-api-access-bc4hz\") pod \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.034555 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b24fcb-1091-4e53-95d0-b1133f1a9b92-operator-scripts\") pod \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\" (UID: \"31b24fcb-1091-4e53-95d0-b1133f1a9b92\") " Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.034929 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b952z\" (UniqueName: \"kubernetes.io/projected/2e99c8a9-852a-446a-b4bf-ff8da617539f-kube-api-access-b952z\") pod \"placement-06fa-account-create-update-tjxv8\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.034988 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e99c8a9-852a-446a-b4bf-ff8da617539f-operator-scripts\") pod \"placement-06fa-account-create-update-tjxv8\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.035105 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/f690712b-647d-455f-af3b-adbe25e2662d-kube-api-access-vmvq7\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.035124 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f690712b-647d-455f-af3b-adbe25e2662d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.035621 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b24fcb-1091-4e53-95d0-b1133f1a9b92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31b24fcb-1091-4e53-95d0-b1133f1a9b92" (UID: "31b24fcb-1091-4e53-95d0-b1133f1a9b92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.039413 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b24fcb-1091-4e53-95d0-b1133f1a9b92-kube-api-access-bc4hz" (OuterVolumeSpecName: "kube-api-access-bc4hz") pod "31b24fcb-1091-4e53-95d0-b1133f1a9b92" (UID: "31b24fcb-1091-4e53-95d0-b1133f1a9b92"). InnerVolumeSpecName "kube-api-access-bc4hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.136975 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b952z\" (UniqueName: \"kubernetes.io/projected/2e99c8a9-852a-446a-b4bf-ff8da617539f-kube-api-access-b952z\") pod \"placement-06fa-account-create-update-tjxv8\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.137028 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e99c8a9-852a-446a-b4bf-ff8da617539f-operator-scripts\") pod \"placement-06fa-account-create-update-tjxv8\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.137156 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b24fcb-1091-4e53-95d0-b1133f1a9b92-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.137168 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc4hz\" (UniqueName: \"kubernetes.io/projected/31b24fcb-1091-4e53-95d0-b1133f1a9b92-kube-api-access-bc4hz\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.137948 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e99c8a9-852a-446a-b4bf-ff8da617539f-operator-scripts\") pod \"placement-06fa-account-create-update-tjxv8\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.140373 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8d55n" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.163109 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b952z\" (UniqueName: \"kubernetes.io/projected/2e99c8a9-852a-446a-b4bf-ff8da617539f-kube-api-access-b952z\") pod \"placement-06fa-account-create-update-tjxv8\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.280025 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.321454 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kzhdt"] Dec 05 01:31:14 crc kubenswrapper[4990]: W1205 01:31:14.330619 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b4c8de_486d_48ba_9db4_3f50e5b4f958.slice/crio-82da9625e7d5a7ec739b1439e78cfe0f5d885fd99f310361b8abb36d9609730e WatchSource:0}: Error finding container 82da9625e7d5a7ec739b1439e78cfe0f5d885fd99f310361b8abb36d9609730e: Status 404 returned error can't find the container with id 82da9625e7d5a7ec739b1439e78cfe0f5d885fd99f310361b8abb36d9609730e Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.373088 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6a81-account-create-update-xj7vd"] Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.447379 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a81-account-create-update-xj7vd" event={"ID":"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f","Type":"ContainerStarted","Data":"5ba32254bd6d373deae939bc710577bbf078e805989719f5a02a390c6c144c3f"} Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.451333 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rpg97" event={"ID":"f690712b-647d-455f-af3b-adbe25e2662d","Type":"ContainerDied","Data":"4481785d7baf930a04a81de44ec9906b922b0b07b9c6f7123c5201962c9ad6d1"} Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.451360 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpg97" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.451382 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4481785d7baf930a04a81de44ec9906b922b0b07b9c6f7123c5201962c9ad6d1" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.455820 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bd6b-account-create-update-2mpnv" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.455809 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd6b-account-create-update-2mpnv" event={"ID":"31b24fcb-1091-4e53-95d0-b1133f1a9b92","Type":"ContainerDied","Data":"1f6149e84c48b482577995efb57273ce873459746741cca2a62fcd8c5ae6ff71"} Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.455958 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6149e84c48b482577995efb57273ce873459746741cca2a62fcd8c5ae6ff71" Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.456980 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kzhdt" event={"ID":"d5b4c8de-486d-48ba-9db4-3f50e5b4f958","Type":"ContainerStarted","Data":"82da9625e7d5a7ec739b1439e78cfe0f5d885fd99f310361b8abb36d9609730e"} Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.564651 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8d55n"] Dec 05 01:31:14 crc kubenswrapper[4990]: W1205 01:31:14.565081 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87a54837_9e3c_4d46_a016_c53e3576aad3.slice/crio-b14c47f03b67edef19bbd89d8f8ec6bf3bed05467a33f8add52a8b3003b82ff6 WatchSource:0}: Error finding container b14c47f03b67edef19bbd89d8f8ec6bf3bed05467a33f8add52a8b3003b82ff6: Status 404 returned error can't find the container with id b14c47f03b67edef19bbd89d8f8ec6bf3bed05467a33f8add52a8b3003b82ff6 Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.729418 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-06fa-account-create-update-tjxv8"] Dec 05 01:31:14 crc kubenswrapper[4990]: W1205 01:31:14.738526 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e99c8a9_852a_446a_b4bf_ff8da617539f.slice/crio-e152c67dfbe06208e8025cbf37fc1ad697afa40b4bdd79eafd75d2065d12743b WatchSource:0}: Error finding container e152c67dfbe06208e8025cbf37fc1ad697afa40b4bdd79eafd75d2065d12743b: Status 404 returned error can't find the container with id e152c67dfbe06208e8025cbf37fc1ad697afa40b4bdd79eafd75d2065d12743b Dec 05 01:31:14 crc kubenswrapper[4990]: I1205 01:31:14.856294 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:14 crc kubenswrapper[4990]: E1205 01:31:14.856505 4990 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 01:31:14 crc kubenswrapper[4990]: E1205 01:31:14.856533 4990 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 01:31:14 crc kubenswrapper[4990]: E1205 01:31:14.856596 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift podName:c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3 nodeName:}" failed. No retries permitted until 2025-12-05 01:31:22.856572523 +0000 UTC m=+1381.232787884 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift") pod "swift-storage-0" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3") : configmap "swift-ring-files" not found Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.469189 4990 generic.go:334] "Generic (PLEG): container finished" podID="87a54837-9e3c-4d46-a016-c53e3576aad3" containerID="4dc2b7b58dbdfb63d630c0e7a057d062e10619c16da2cfb39d4635ab0061b0d1" exitCode=0 Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.469832 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8d55n" event={"ID":"87a54837-9e3c-4d46-a016-c53e3576aad3","Type":"ContainerDied","Data":"4dc2b7b58dbdfb63d630c0e7a057d062e10619c16da2cfb39d4635ab0061b0d1"} Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.470294 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8d55n" event={"ID":"87a54837-9e3c-4d46-a016-c53e3576aad3","Type":"ContainerStarted","Data":"b14c47f03b67edef19bbd89d8f8ec6bf3bed05467a33f8add52a8b3003b82ff6"} Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.473250 4990 generic.go:334] "Generic (PLEG): container finished" podID="d5b4c8de-486d-48ba-9db4-3f50e5b4f958" containerID="cc14d93c184af156cb1266a8caa50ed4d94fad3578f8bdedeface96926975500" exitCode=0 Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.473317 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kzhdt" event={"ID":"d5b4c8de-486d-48ba-9db4-3f50e5b4f958","Type":"ContainerDied","Data":"cc14d93c184af156cb1266a8caa50ed4d94fad3578f8bdedeface96926975500"} Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.475608 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-06fa-account-create-update-tjxv8" event={"ID":"2e99c8a9-852a-446a-b4bf-ff8da617539f","Type":"ContainerStarted","Data":"b422db709e2c315f6b31eb654667cb25b1c749864ca47aab010243422f9b6ce3"} Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.475658 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-06fa-account-create-update-tjxv8" event={"ID":"2e99c8a9-852a-446a-b4bf-ff8da617539f","Type":"ContainerStarted","Data":"e152c67dfbe06208e8025cbf37fc1ad697afa40b4bdd79eafd75d2065d12743b"} Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.477749 4990 generic.go:334] "Generic (PLEG): container finished" podID="a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" containerID="d9bd1be2f40dad0b0e75e9a6eb2169029c19d587cfd8318968d95a88ed6d8e32" exitCode=0 Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.477790 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a81-account-create-update-xj7vd" event={"ID":"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f","Type":"ContainerDied","Data":"d9bd1be2f40dad0b0e75e9a6eb2169029c19d587cfd8318968d95a88ed6d8e32"} Dec 05 01:31:15 crc kubenswrapper[4990]: I1205 01:31:15.528551 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-06fa-account-create-update-tjxv8" podStartSLOduration=2.5285241689999998 podStartE2EDuration="2.528524169s" podCreationTimestamp="2025-12-05 01:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:15.52328218 +0000 UTC m=+1373.899497571" watchObservedRunningTime="2025-12-05 01:31:15.528524169 +0000 UTC m=+1373.904739530" Dec 05 01:31:16 crc 
kubenswrapper[4990]: I1205 01:31:16.185642 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-gml76" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.257717 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qv2fl"] Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.257997 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerName="dnsmasq-dns" containerID="cri-o://93c98f24840aff704e4595afe795207607331811deed7c78170c2569b8c2d970" gracePeriod=10 Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.487786 4990 generic.go:334] "Generic (PLEG): container finished" podID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerID="93c98f24840aff704e4595afe795207607331811deed7c78170c2569b8c2d970" exitCode=0 Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.487820 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" event={"ID":"d59839fd-b60f-4cb2-bfb7-166ad40576f2","Type":"ContainerDied","Data":"93c98f24840aff704e4595afe795207607331811deed7c78170c2569b8c2d970"} Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.493815 4990 generic.go:334] "Generic (PLEG): container finished" podID="2e99c8a9-852a-446a-b4bf-ff8da617539f" containerID="b422db709e2c315f6b31eb654667cb25b1c749864ca47aab010243422f9b6ce3" exitCode=0 Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.493899 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-06fa-account-create-update-tjxv8" event={"ID":"2e99c8a9-852a-446a-b4bf-ff8da617539f","Type":"ContainerDied","Data":"b422db709e2c315f6b31eb654667cb25b1c749864ca47aab010243422f9b6ce3"} Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.765661 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.789751 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-config\") pod \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.789959 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-dns-svc\") pod \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.790025 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42qxj\" (UniqueName: \"kubernetes.io/projected/d59839fd-b60f-4cb2-bfb7-166ad40576f2-kube-api-access-42qxj\") pod \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\" (UID: \"d59839fd-b60f-4cb2-bfb7-166ad40576f2\") " Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.877588 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59839fd-b60f-4cb2-bfb7-166ad40576f2-kube-api-access-42qxj" (OuterVolumeSpecName: "kube-api-access-42qxj") pod "d59839fd-b60f-4cb2-bfb7-166ad40576f2" (UID: "d59839fd-b60f-4cb2-bfb7-166ad40576f2"). InnerVolumeSpecName "kube-api-access-42qxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.891874 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42qxj\" (UniqueName: \"kubernetes.io/projected/d59839fd-b60f-4cb2-bfb7-166ad40576f2-kube-api-access-42qxj\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.902352 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-config" (OuterVolumeSpecName: "config") pod "d59839fd-b60f-4cb2-bfb7-166ad40576f2" (UID: "d59839fd-b60f-4cb2-bfb7-166ad40576f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.907272 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d59839fd-b60f-4cb2-bfb7-166ad40576f2" (UID: "d59839fd-b60f-4cb2-bfb7-166ad40576f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.993075 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:16 crc kubenswrapper[4990]: I1205 01:31:16.993107 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d59839fd-b60f-4cb2-bfb7-166ad40576f2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.019356 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.093863 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62n8\" (UniqueName: \"kubernetes.io/projected/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-kube-api-access-n62n8\") pod \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.094084 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-operator-scripts\") pod \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\" (UID: \"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f\") " Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.094871 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" (UID: "a8fe91e1-c026-47ec-a537-4bfd8c6ad73f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.097043 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-kube-api-access-n62n8" (OuterVolumeSpecName: "kube-api-access-n62n8") pod "a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" (UID: "a8fe91e1-c026-47ec-a537-4bfd8c6ad73f"). InnerVolumeSpecName "kube-api-access-n62n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.107200 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8d55n" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.133554 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.195109 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87a54837-9e3c-4d46-a016-c53e3576aad3-operator-scripts\") pod \"87a54837-9e3c-4d46-a016-c53e3576aad3\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.195266 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf6vj\" (UniqueName: \"kubernetes.io/projected/87a54837-9e3c-4d46-a016-c53e3576aad3-kube-api-access-hf6vj\") pod \"87a54837-9e3c-4d46-a016-c53e3576aad3\" (UID: \"87a54837-9e3c-4d46-a016-c53e3576aad3\") " Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.195306 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrnm\" (UniqueName: \"kubernetes.io/projected/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-kube-api-access-kzrnm\") pod \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.195454 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-operator-scripts\") pod \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\" (UID: \"d5b4c8de-486d-48ba-9db4-3f50e5b4f958\") " Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.196047 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5b4c8de-486d-48ba-9db4-3f50e5b4f958" (UID: "d5b4c8de-486d-48ba-9db4-3f50e5b4f958"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.196110 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a54837-9e3c-4d46-a016-c53e3576aad3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87a54837-9e3c-4d46-a016-c53e3576aad3" (UID: "87a54837-9e3c-4d46-a016-c53e3576aad3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.196188 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.196200 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.196209 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87a54837-9e3c-4d46-a016-c53e3576aad3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.196234 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n62n8\" (UniqueName: \"kubernetes.io/projected/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f-kube-api-access-n62n8\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.198791 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-kube-api-access-kzrnm" (OuterVolumeSpecName: "kube-api-access-kzrnm") pod "d5b4c8de-486d-48ba-9db4-3f50e5b4f958" (UID: "d5b4c8de-486d-48ba-9db4-3f50e5b4f958"). InnerVolumeSpecName "kube-api-access-kzrnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.198826 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a54837-9e3c-4d46-a016-c53e3576aad3-kube-api-access-hf6vj" (OuterVolumeSpecName: "kube-api-access-hf6vj") pod "87a54837-9e3c-4d46-a016-c53e3576aad3" (UID: "87a54837-9e3c-4d46-a016-c53e3576aad3"). InnerVolumeSpecName "kube-api-access-hf6vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.297552 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf6vj\" (UniqueName: \"kubernetes.io/projected/87a54837-9e3c-4d46-a016-c53e3576aad3-kube-api-access-hf6vj\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.297589 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrnm\" (UniqueName: \"kubernetes.io/projected/d5b4c8de-486d-48ba-9db4-3f50e5b4f958-kube-api-access-kzrnm\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.511107 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kzhdt" event={"ID":"d5b4c8de-486d-48ba-9db4-3f50e5b4f958","Type":"ContainerDied","Data":"82da9625e7d5a7ec739b1439e78cfe0f5d885fd99f310361b8abb36d9609730e"} Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.511149 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kzhdt" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.511165 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82da9625e7d5a7ec739b1439e78cfe0f5d885fd99f310361b8abb36d9609730e" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.514422 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6a81-account-create-update-xj7vd" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.514431 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a81-account-create-update-xj7vd" event={"ID":"a8fe91e1-c026-47ec-a537-4bfd8c6ad73f","Type":"ContainerDied","Data":"5ba32254bd6d373deae939bc710577bbf078e805989719f5a02a390c6c144c3f"} Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.514510 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba32254bd6d373deae939bc710577bbf078e805989719f5a02a390c6c144c3f" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.516851 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8d55n" event={"ID":"87a54837-9e3c-4d46-a016-c53e3576aad3","Type":"ContainerDied","Data":"b14c47f03b67edef19bbd89d8f8ec6bf3bed05467a33f8add52a8b3003b82ff6"} Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.516892 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b14c47f03b67edef19bbd89d8f8ec6bf3bed05467a33f8add52a8b3003b82ff6" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.516964 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8d55n" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.525028 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.531326 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qv2fl" event={"ID":"d59839fd-b60f-4cb2-bfb7-166ad40576f2","Type":"ContainerDied","Data":"4304e6c8b4f52a819676c1c0234a78ce649800a398abe5e62c92094b71cb0756"} Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.531354 4990 scope.go:117] "RemoveContainer" containerID="93c98f24840aff704e4595afe795207607331811deed7c78170c2569b8c2d970" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.571960 4990 scope.go:117] "RemoveContainer" containerID="a94afd469f5b53e9a5f675fa56dd43328148fe5a340a5b3a4e73d7e4025d4623" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.610060 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qv2fl"] Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.617366 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qv2fl"] Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.939949 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:17 crc kubenswrapper[4990]: I1205 01:31:17.944405 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" path="/var/lib/kubelet/pods/d59839fd-b60f-4cb2-bfb7-166ad40576f2/volumes" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.012915 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b952z\" (UniqueName: \"kubernetes.io/projected/2e99c8a9-852a-446a-b4bf-ff8da617539f-kube-api-access-b952z\") pod \"2e99c8a9-852a-446a-b4bf-ff8da617539f\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.013132 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e99c8a9-852a-446a-b4bf-ff8da617539f-operator-scripts\") pod \"2e99c8a9-852a-446a-b4bf-ff8da617539f\" (UID: \"2e99c8a9-852a-446a-b4bf-ff8da617539f\") " Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.013728 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e99c8a9-852a-446a-b4bf-ff8da617539f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e99c8a9-852a-446a-b4bf-ff8da617539f" (UID: "2e99c8a9-852a-446a-b4bf-ff8da617539f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.021612 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e99c8a9-852a-446a-b4bf-ff8da617539f-kube-api-access-b952z" (OuterVolumeSpecName: "kube-api-access-b952z") pod "2e99c8a9-852a-446a-b4bf-ff8da617539f" (UID: "2e99c8a9-852a-446a-b4bf-ff8da617539f"). InnerVolumeSpecName "kube-api-access-b952z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.115836 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b952z\" (UniqueName: \"kubernetes.io/projected/2e99c8a9-852a-446a-b4bf-ff8da617539f-kube-api-access-b952z\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.115879 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e99c8a9-852a-446a-b4bf-ff8da617539f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.540461 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-06fa-account-create-update-tjxv8" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.540526 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-06fa-account-create-update-tjxv8" event={"ID":"2e99c8a9-852a-446a-b4bf-ff8da617539f","Type":"ContainerDied","Data":"e152c67dfbe06208e8025cbf37fc1ad697afa40b4bdd79eafd75d2065d12743b"} Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.540581 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e152c67dfbe06208e8025cbf37fc1ad697afa40b4bdd79eafd75d2065d12743b" Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.543402 4990 generic.go:334] "Generic (PLEG): container finished" podID="59d6536f-e0ed-42d2-9676-40bc88de1473" containerID="4ab656dfc2d01202160cd720237d9949e5273fcbe6945b48b1b1bc3f7a17c02a" exitCode=0 Dec 05 01:31:18 crc kubenswrapper[4990]: I1205 01:31:18.543577 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snp9x" event={"ID":"59d6536f-e0ed-42d2-9676-40bc88de1473","Type":"ContainerDied","Data":"4ab656dfc2d01202160cd720237d9949e5273fcbe6945b48b1b1bc3f7a17c02a"} Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.261469 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-56hk7"] Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262349 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerName="init" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262375 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerName="init" Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262405 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a54837-9e3c-4d46-a016-c53e3576aad3" containerName="mariadb-database-create" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262418 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a54837-9e3c-4d46-a016-c53e3576aad3" containerName="mariadb-database-create" Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262443 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262459 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262544 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerName="dnsmasq-dns" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262561 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerName="dnsmasq-dns" Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262588 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b24fcb-1091-4e53-95d0-b1133f1a9b92" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262605 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b24fcb-1091-4e53-95d0-b1133f1a9b92" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262653 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b4c8de-486d-48ba-9db4-3f50e5b4f958" 
containerName="mariadb-database-create" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262665 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b4c8de-486d-48ba-9db4-3f50e5b4f958" containerName="mariadb-database-create" Dec 05 01:31:19 crc kubenswrapper[4990]: E1205 01:31:19.262686 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e99c8a9-852a-446a-b4bf-ff8da617539f" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262697 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e99c8a9-852a-446a-b4bf-ff8da617539f" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.262980 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e99c8a9-852a-446a-b4bf-ff8da617539f" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.263003 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59839fd-b60f-4cb2-bfb7-166ad40576f2" containerName="dnsmasq-dns" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.263027 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b24fcb-1091-4e53-95d0-b1133f1a9b92" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.263064 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a54837-9e3c-4d46-a016-c53e3576aad3" containerName="mariadb-database-create" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.263089 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" containerName="mariadb-account-create-update" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.263108 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b4c8de-486d-48ba-9db4-3f50e5b4f958" containerName="mariadb-database-create" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.263977 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.271864 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sdvf8" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.272029 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.288990 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-56hk7"] Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.338738 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-config-data\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.338872 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfwz\" (UniqueName: \"kubernetes.io/projected/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-kube-api-access-9vfwz\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.339074 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-db-sync-config-data\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.339130 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-combined-ca-bundle\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.440577 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vfwz\" (UniqueName: \"kubernetes.io/projected/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-kube-api-access-9vfwz\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.440772 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-db-sync-config-data\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.440837 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-combined-ca-bundle\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.440917 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-config-data\") pod 
\"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.448638 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-combined-ca-bundle\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.449316 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-db-sync-config-data\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.449649 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-config-data\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.470971 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfwz\" (UniqueName: \"kubernetes.io/projected/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-kube-api-access-9vfwz\") pod \"glance-db-sync-56hk7\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.586906 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-56hk7" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.918448 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949148 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-dispersionconf\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949201 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-scripts\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949272 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59d6536f-e0ed-42d2-9676-40bc88de1473-etc-swift\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949316 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bd9k\" (UniqueName: \"kubernetes.io/projected/59d6536f-e0ed-42d2-9676-40bc88de1473-kube-api-access-7bd9k\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949361 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-combined-ca-bundle\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949399 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-ring-data-devices\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.949428 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-swiftconf\") pod \"59d6536f-e0ed-42d2-9676-40bc88de1473\" (UID: \"59d6536f-e0ed-42d2-9676-40bc88de1473\") " Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.952053 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.954325 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d6536f-e0ed-42d2-9676-40bc88de1473-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.956405 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d6536f-e0ed-42d2-9676-40bc88de1473-kube-api-access-7bd9k" (OuterVolumeSpecName: "kube-api-access-7bd9k") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "kube-api-access-7bd9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.959168 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.973710 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-scripts" (OuterVolumeSpecName: "scripts") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.983282 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:31:19 crc kubenswrapper[4990]: I1205 01:31:19.986317 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "59d6536f-e0ed-42d2-9676-40bc88de1473" (UID: "59d6536f-e0ed-42d2-9676-40bc88de1473"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.005362 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-56hk7"] Dec 05 01:31:20 crc kubenswrapper[4990]: W1205 01:31:20.006867 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfef6189_60ca_4088_97fa_6dc3fb1e1a52.slice/crio-8ebf7667fb35418e886e7e9080863104534da6753877922f2aae0cc9ffbd5616 WatchSource:0}: Error finding container 8ebf7667fb35418e886e7e9080863104534da6753877922f2aae0cc9ffbd5616: Status 404 returned error can't find the container with id 8ebf7667fb35418e886e7e9080863104534da6753877922f2aae0cc9ffbd5616 Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050654 4990 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59d6536f-e0ed-42d2-9676-40bc88de1473-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050680 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bd9k\" (UniqueName: \"kubernetes.io/projected/59d6536f-e0ed-42d2-9676-40bc88de1473-kube-api-access-7bd9k\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050691 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050700 4990 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050710 4990 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050718 4990 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59d6536f-e0ed-42d2-9676-40bc88de1473-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.050727 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59d6536f-e0ed-42d2-9676-40bc88de1473-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.565416 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snp9x" event={"ID":"59d6536f-e0ed-42d2-9676-40bc88de1473","Type":"ContainerDied","Data":"f82de9178fb93c4a32c6359d093ff65e3c56174c69098fdd95ce566603fbde19"} Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.565716 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82de9178fb93c4a32c6359d093ff65e3c56174c69098fdd95ce566603fbde19" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.565462 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-snp9x" Dec 05 01:31:20 crc kubenswrapper[4990]: I1205 01:31:20.568604 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-56hk7" event={"ID":"bfef6189-60ca-4088-97fa-6dc3fb1e1a52","Type":"ContainerStarted","Data":"8ebf7667fb35418e886e7e9080863104534da6753877922f2aae0cc9ffbd5616"} Dec 05 01:31:21 crc kubenswrapper[4990]: I1205 01:31:21.869304 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 05 01:31:22 crc kubenswrapper[4990]: I1205 01:31:22.589229 4990 generic.go:334] "Generic (PLEG): container finished" podID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerID="bf23153d47795f08a1beb8bebe5fd81358e0773080569922f18d9cf836a35d62" exitCode=0 Dec 05 01:31:22 crc kubenswrapper[4990]: I1205 01:31:22.589310 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ed473a7a-f068-49a3-ae4c-b57b39e33b28","Type":"ContainerDied","Data":"bf23153d47795f08a1beb8bebe5fd81358e0773080569922f18d9cf836a35d62"} Dec 05 01:31:22 crc kubenswrapper[4990]: I1205 01:31:22.591318 4990 generic.go:334] "Generic (PLEG): container finished" podID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerID="5f5960287e71d7a833bacd70a7ad0510d80b6d222b74bb7c1aa36b55923710c9" exitCode=0 Dec 05 01:31:22 crc kubenswrapper[4990]: I1205 01:31:22.591355 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"809c1920-3205-411c-a8c1-ed027b7e3b1f","Type":"ContainerDied","Data":"5f5960287e71d7a833bacd70a7ad0510d80b6d222b74bb7c1aa36b55923710c9"} Dec 05 01:31:22 crc kubenswrapper[4990]: I1205 01:31:22.909668 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:22 crc kubenswrapper[4990]: I1205 01:31:22.917827 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"swift-storage-0\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " pod="openstack/swift-storage-0" Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.188168 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.601924 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ed473a7a-f068-49a3-ae4c-b57b39e33b28","Type":"ContainerStarted","Data":"6a124f2ceb58f1b28fd7e33d50fc28756c66696a4774e8efa70e6a53e7a97329"} Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.602410 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.606000 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"809c1920-3205-411c-a8c1-ed027b7e3b1f","Type":"ContainerStarted","Data":"39a3ea367ecbac2fdb7b56ed37380e3e71e8af696eebed8fe12028523b333328"} Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.606234 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.631711 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.836542896 podStartE2EDuration="54.631692981s" podCreationTimestamp="2025-12-05 01:30:29 +0000 UTC" firstStartedPulling="2025-12-05 01:30:31.463620218 +0000 UTC m=+1329.839835579" lastFinishedPulling="2025-12-05 01:30:49.258770263 +0000 UTC m=+1347.634985664" observedRunningTime="2025-12-05 01:31:23.621907713 +0000 UTC m=+1381.998123074" watchObservedRunningTime="2025-12-05 01:31:23.631692981 +0000 UTC m=+1382.007908342" Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.651093 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.449468132 podStartE2EDuration="54.651073031s" podCreationTimestamp="2025-12-05 01:30:29 +0000 UTC" firstStartedPulling="2025-12-05 01:30:31.554392713 +0000 UTC m=+1329.930608064" lastFinishedPulling="2025-12-05 01:30:47.755997592 +0000 UTC m=+1346.132212963" observedRunningTime="2025-12-05 01:31:23.650029481 +0000 UTC m=+1382.026244832" watchObservedRunningTime="2025-12-05 01:31:23.651073031 +0000 UTC m=+1382.027288392" Dec 05 01:31:23 crc kubenswrapper[4990]: I1205 01:31:23.759545 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 01:31:23 crc kubenswrapper[4990]: W1205 01:31:23.771634 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f05c11_ed0c_4e8d_bd22_05c8787cbcc3.slice/crio-614b69fd6049435f72e6968763e104510365cf2a45be10d51544f05bcceefba6 WatchSource:0}: Error finding container 614b69fd6049435f72e6968763e104510365cf2a45be10d51544f05bcceefba6: Status 404 returned error can't find the container with id 614b69fd6049435f72e6968763e104510365cf2a45be10d51544f05bcceefba6 Dec 05 01:31:24 crc kubenswrapper[4990]: I1205 01:31:24.205266 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" probeResult="failure" output=< Dec 05 01:31:24 crc kubenswrapper[4990]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 01:31:24 crc kubenswrapper[4990]: > Dec 05 01:31:24 crc kubenswrapper[4990]: I1205 01:31:24.616206 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"614b69fd6049435f72e6968763e104510365cf2a45be10d51544f05bcceefba6"} Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.214262 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" probeResult="failure" output=< Dec 05 01:31:29 crc kubenswrapper[4990]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 01:31:29 crc kubenswrapper[4990]: > Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.268029 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.269363 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.503419 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nbpzw-config-twxw6"] Dec 05 01:31:29 crc kubenswrapper[4990]: E1205 01:31:29.505745 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d6536f-e0ed-42d2-9676-40bc88de1473" containerName="swift-ring-rebalance" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.505876 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d6536f-e0ed-42d2-9676-40bc88de1473" containerName="swift-ring-rebalance" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.506168 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d6536f-e0ed-42d2-9676-40bc88de1473" containerName="swift-ring-rebalance" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.506988 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.509948 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.524230 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nbpzw-config-twxw6"] Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.637141 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-scripts\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.637242 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-log-ovn\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.637311 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6nr\" (UniqueName: \"kubernetes.io/projected/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-kube-api-access-5d6nr\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.637383 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-additional-scripts\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.637538 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run-ovn\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.637583 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739445 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-scripts\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739518 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-log-ovn\") pod 
\"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739557 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d6nr\" (UniqueName: \"kubernetes.io/projected/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-kube-api-access-5d6nr\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739585 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-additional-scripts\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739624 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run-ovn\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739643 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.739983 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.741671 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-log-ovn\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.741734 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run-ovn\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.742544 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-additional-scripts\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.742728 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-scripts\") pod 
\"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.764301 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d6nr\" (UniqueName: \"kubernetes.io/projected/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-kube-api-access-5d6nr\") pod \"ovn-controller-nbpzw-config-twxw6\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:29 crc kubenswrapper[4990]: I1205 01:31:29.837612 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:33 crc kubenswrapper[4990]: E1205 01:31:33.641599 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 05 01:31:33 crc kubenswrapper[4990]: E1205 01:31:33.642843 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vfwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-56hk7_openstack(bfef6189-60ca-4088-97fa-6dc3fb1e1a52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 01:31:33 crc kubenswrapper[4990]: E1205 01:31:33.644058 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack/glance-db-sync-56hk7" podUID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" Dec 05 01:31:33 crc kubenswrapper[4990]: E1205 01:31:33.693964 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-56hk7" podUID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.207226 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" probeResult="failure" output=< Dec 05 01:31:34 crc kubenswrapper[4990]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 01:31:34 crc kubenswrapper[4990]: > Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.262283 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nbpzw-config-twxw6"] Dec 05 01:31:34 crc kubenswrapper[4990]: W1205 01:31:34.273192 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0cbbb9_2bad_48e1_b113_7606e53a2c39.slice/crio-8c594b17c0b01ddd90337b1297030bce660511f2fa5900af3676964d4ce36568 WatchSource:0}: Error finding container 8c594b17c0b01ddd90337b1297030bce660511f2fa5900af3676964d4ce36568: Status 404 returned error can't find the container with id 8c594b17c0b01ddd90337b1297030bce660511f2fa5900af3676964d4ce36568 Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.709674 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1"} Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.710060 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268"} Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.710075 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1"} Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.710086 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976"} Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.713075 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw-config-twxw6" event={"ID":"ae0cbbb9-2bad-48e1-b113-7606e53a2c39","Type":"ContainerStarted","Data":"3799513540bbb8d8c3fcf656488ca88981ad8aa1bbb031424d8330e5f55dfe03"} Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 01:31:34.713127 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw-config-twxw6" event={"ID":"ae0cbbb9-2bad-48e1-b113-7606e53a2c39","Type":"ContainerStarted","Data":"8c594b17c0b01ddd90337b1297030bce660511f2fa5900af3676964d4ce36568"} Dec 05 01:31:34 crc kubenswrapper[4990]: I1205 
01:31:34.731177 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nbpzw-config-twxw6" podStartSLOduration=5.731156531 podStartE2EDuration="5.731156531s" podCreationTimestamp="2025-12-05 01:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:34.72725225 +0000 UTC m=+1393.103467611" watchObservedRunningTime="2025-12-05 01:31:34.731156531 +0000 UTC m=+1393.107371892" Dec 05 01:31:35 crc kubenswrapper[4990]: I1205 01:31:35.723534 4990 generic.go:334] "Generic (PLEG): container finished" podID="ae0cbbb9-2bad-48e1-b113-7606e53a2c39" containerID="3799513540bbb8d8c3fcf656488ca88981ad8aa1bbb031424d8330e5f55dfe03" exitCode=0 Dec 05 01:31:35 crc kubenswrapper[4990]: I1205 01:31:35.723694 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw-config-twxw6" event={"ID":"ae0cbbb9-2bad-48e1-b113-7606e53a2c39","Type":"ContainerDied","Data":"3799513540bbb8d8c3fcf656488ca88981ad8aa1bbb031424d8330e5f55dfe03"} Dec 05 01:31:35 crc kubenswrapper[4990]: I1205 01:31:35.729929 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223"} Dec 05 01:31:36 crc kubenswrapper[4990]: I1205 01:31:36.746259 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8"} Dec 05 01:31:36 crc kubenswrapper[4990]: I1205 01:31:36.746799 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974"} Dec 05 01:31:36 crc kubenswrapper[4990]: I1205 01:31:36.746838 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e"} Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.097020 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.267754 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-additional-scripts\") pod \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.267874 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-scripts\") pod \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.267920 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d6nr\" (UniqueName: \"kubernetes.io/projected/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-kube-api-access-5d6nr\") pod \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.267939 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run\") pod \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.268622 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run" (OuterVolumeSpecName: "var-run") pod "ae0cbbb9-2bad-48e1-b113-7606e53a2c39" (UID: "ae0cbbb9-2bad-48e1-b113-7606e53a2c39"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.268742 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run-ovn\") pod \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.268824 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ae0cbbb9-2bad-48e1-b113-7606e53a2c39" (UID: "ae0cbbb9-2bad-48e1-b113-7606e53a2c39"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.269103 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ae0cbbb9-2bad-48e1-b113-7606e53a2c39" (UID: "ae0cbbb9-2bad-48e1-b113-7606e53a2c39"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.269868 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-scripts" (OuterVolumeSpecName: "scripts") pod "ae0cbbb9-2bad-48e1-b113-7606e53a2c39" (UID: "ae0cbbb9-2bad-48e1-b113-7606e53a2c39"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.269995 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ae0cbbb9-2bad-48e1-b113-7606e53a2c39" (UID: "ae0cbbb9-2bad-48e1-b113-7606e53a2c39"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.270104 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-log-ovn\") pod \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\" (UID: \"ae0cbbb9-2bad-48e1-b113-7606e53a2c39\") " Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.271225 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.271270 4990 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.271289 4990 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.271306 4990 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.271325 4990 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.276578 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-kube-api-access-5d6nr" (OuterVolumeSpecName: "kube-api-access-5d6nr") pod "ae0cbbb9-2bad-48e1-b113-7606e53a2c39" (UID: "ae0cbbb9-2bad-48e1-b113-7606e53a2c39"). InnerVolumeSpecName "kube-api-access-5d6nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.357670 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nbpzw-config-twxw6"] Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.367498 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nbpzw-config-twxw6"] Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.373620 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d6nr\" (UniqueName: \"kubernetes.io/projected/ae0cbbb9-2bad-48e1-b113-7606e53a2c39-kube-api-access-5d6nr\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.754879 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c594b17c0b01ddd90337b1297030bce660511f2fa5900af3676964d4ce36568" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.754998 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nbpzw-config-twxw6" Dec 05 01:31:37 crc kubenswrapper[4990]: I1205 01:31:37.942625 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0cbbb9-2bad-48e1-b113-7606e53a2c39" path="/var/lib/kubelet/pods/ae0cbbb9-2bad-48e1-b113-7606e53a2c39/volumes" Dec 05 01:31:38 crc kubenswrapper[4990]: I1205 01:31:38.775330 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa"} Dec 05 01:31:38 crc kubenswrapper[4990]: I1205 01:31:38.775730 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47"} Dec 05 01:31:38 crc kubenswrapper[4990]: I1205 01:31:38.775750 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337"} Dec 05 01:31:38 crc kubenswrapper[4990]: I1205 01:31:38.775768 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd"} Dec 05 01:31:39 crc kubenswrapper[4990]: I1205 01:31:39.223539 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nbpzw" Dec 05 01:31:39 crc kubenswrapper[4990]: I1205 01:31:39.789943 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5"} Dec 05 01:31:39 crc kubenswrapper[4990]: I1205 01:31:39.790357 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194"} Dec 05 01:31:39 crc kubenswrapper[4990]: I1205 01:31:39.790385 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerStarted","Data":"6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb"} Dec 05 01:31:39 crc kubenswrapper[4990]: I1205 01:31:39.847157 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.032484893 podStartE2EDuration="34.847135444s" podCreationTimestamp="2025-12-05 01:31:05 +0000 UTC" firstStartedPulling="2025-12-05 01:31:23.773108824 +0000 UTC m=+1382.149324185" lastFinishedPulling="2025-12-05 01:31:37.587759375 +0000 UTC m=+1395.963974736" observedRunningTime="2025-12-05 01:31:39.840908247 +0000 UTC m=+1398.217123608" watchObservedRunningTime="2025-12-05 01:31:39.847135444 +0000 UTC m=+1398.223350805" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.154683 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-r8v6b"] Dec 05 01:31:40 crc kubenswrapper[4990]: E1205 01:31:40.155110 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0cbbb9-2bad-48e1-b113-7606e53a2c39" containerName="ovn-config" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.155134 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0cbbb9-2bad-48e1-b113-7606e53a2c39" containerName="ovn-config" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.155371 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0cbbb9-2bad-48e1-b113-7606e53a2c39" containerName="ovn-config" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.156556 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.158693 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.174343 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-r8v6b"] Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.319110 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.319163 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82x4s\" (UniqueName: \"kubernetes.io/projected/f1a8d39e-13c0-476d-801a-21d79f4b3009-kube-api-access-82x4s\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.319228 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-config\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.319311 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-svc\") pod 
\"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.319350 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.319378 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.421026 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-svc\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.421080 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.421105 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.421159 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.421177 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82x4s\" (UniqueName: \"kubernetes.io/projected/f1a8d39e-13c0-476d-801a-21d79f4b3009-kube-api-access-82x4s\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.421204 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-config\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.422090 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-config\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: 
\"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.422231 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.422337 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-svc\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.422555 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.422666 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.439336 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82x4s\" (UniqueName: \"kubernetes.io/projected/f1a8d39e-13c0-476d-801a-21d79f4b3009-kube-api-access-82x4s\") pod \"dnsmasq-dns-764c5664d7-r8v6b\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.474912 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b"
Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.919796 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-r8v6b"]
Dec 05 01:31:40 crc kubenswrapper[4990]: W1205 01:31:40.928331 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a8d39e_13c0_476d_801a_21d79f4b3009.slice/crio-1ef1678fa28f2557ef84a6a74a590b8903e92e86bbb4cbb0fe10d93df69b8233 WatchSource:0}: Error finding container 1ef1678fa28f2557ef84a6a74a590b8903e92e86bbb4cbb0fe10d93df69b8233: Status 404 returned error can't find the container with id 1ef1678fa28f2557ef84a6a74a590b8903e92e86bbb4cbb0fe10d93df69b8233
Dec 05 01:31:40 crc kubenswrapper[4990]: I1205 01:31:40.966920 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 01:31:41 crc kubenswrapper[4990]: I1205 01:31:41.102453 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 05 01:31:41 crc kubenswrapper[4990]: I1205 01:31:41.807950 4990 generic.go:334] "Generic (PLEG): container finished" podID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerID="9173c51ad2edcf89d697431f605057c5fd46290698aa23de4087798cbe3ccfe7" exitCode=0
Dec 05 01:31:41 crc kubenswrapper[4990]: I1205 01:31:41.808005 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" event={"ID":"f1a8d39e-13c0-476d-801a-21d79f4b3009","Type":"ContainerDied","Data":"9173c51ad2edcf89d697431f605057c5fd46290698aa23de4087798cbe3ccfe7"}
Dec 05 01:31:41 crc kubenswrapper[4990]: I1205 01:31:41.808408 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" event={"ID":"f1a8d39e-13c0-476d-801a-21d79f4b3009","Type":"ContainerStarted","Data":"1ef1678fa28f2557ef84a6a74a590b8903e92e86bbb4cbb0fe10d93df69b8233"}
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.812016 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rm8cx"]
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.813207 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.816704 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" event={"ID":"f1a8d39e-13c0-476d-801a-21d79f4b3009","Type":"ContainerStarted","Data":"debdff4e77cd7ef088d5b1c07f0dce5c7871cfdf001e627a8484c99f302671d2"}
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.816968 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.822913 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rm8cx"]
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.878431 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgvc\" (UniqueName: \"kubernetes.io/projected/42546ba1-6f6e-437c-90f1-53368b287b1a-kube-api-access-jhgvc\") pod \"cinder-db-create-rm8cx\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") " pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.878515 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42546ba1-6f6e-437c-90f1-53368b287b1a-operator-scripts\") pod \"cinder-db-create-rm8cx\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") " pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.902908 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" podStartSLOduration=2.902892059 podStartE2EDuration="2.902892059s" podCreationTimestamp="2025-12-05 01:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:42.852103608 +0000 UTC m=+1401.228318979" watchObservedRunningTime="2025-12-05 01:31:42.902892059 +0000 UTC m=+1401.279107420"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.906496 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-phsw5"]
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.907705 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.929514 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-phsw5"]
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.936170 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a4a4-account-create-update-rvgtj"]
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.937109 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.940961 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.981535 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjch\" (UniqueName: \"kubernetes.io/projected/187debb7-c09c-43ee-b6bf-263a1df1d4e0-kube-api-access-xpjch\") pod \"barbican-db-create-phsw5\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") " pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.981594 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgvc\" (UniqueName: \"kubernetes.io/projected/42546ba1-6f6e-437c-90f1-53368b287b1a-kube-api-access-jhgvc\") pod \"cinder-db-create-rm8cx\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") " pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.981635 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42546ba1-6f6e-437c-90f1-53368b287b1a-operator-scripts\") pod \"cinder-db-create-rm8cx\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") " pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.981703 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187debb7-c09c-43ee-b6bf-263a1df1d4e0-operator-scripts\") pod \"barbican-db-create-phsw5\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") " pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.982474 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42546ba1-6f6e-437c-90f1-53368b287b1a-operator-scripts\") pod \"cinder-db-create-rm8cx\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") " pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:42 crc kubenswrapper[4990]: I1205 01:31:42.986426 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a4a4-account-create-update-rvgtj"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.050314 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgvc\" (UniqueName: \"kubernetes.io/projected/42546ba1-6f6e-437c-90f1-53368b287b1a-kube-api-access-jhgvc\") pod \"cinder-db-create-rm8cx\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") " pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.083893 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187debb7-c09c-43ee-b6bf-263a1df1d4e0-operator-scripts\") pod \"barbican-db-create-phsw5\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") " pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.083954 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjch\" (UniqueName: \"kubernetes.io/projected/187debb7-c09c-43ee-b6bf-263a1df1d4e0-kube-api-access-xpjch\") pod \"barbican-db-create-phsw5\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") " pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.083997 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-dd52-account-create-update-gk7b2"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.084629 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187debb7-c09c-43ee-b6bf-263a1df1d4e0-operator-scripts\") pod \"barbican-db-create-phsw5\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") " pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.085091 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.084009 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxmn\" (UniqueName: \"kubernetes.io/projected/504df873-8902-4568-b465-40d75b755fee-kube-api-access-mpxmn\") pod \"cinder-a4a4-account-create-update-rvgtj\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") " pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.085608 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504df873-8902-4568-b465-40d75b755fee-operator-scripts\") pod \"cinder-a4a4-account-create-update-rvgtj\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") " pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.087945 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.110762 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dd52-account-create-update-gk7b2"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.138714 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.144332 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjch\" (UniqueName: \"kubernetes.io/projected/187debb7-c09c-43ee-b6bf-263a1df1d4e0-kube-api-access-xpjch\") pod \"barbican-db-create-phsw5\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") " pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.185041 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vm2sd"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.186240 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.187391 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504df873-8902-4568-b465-40d75b755fee-operator-scripts\") pod \"cinder-a4a4-account-create-update-rvgtj\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") " pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.187531 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-operator-scripts\") pod \"barbican-dd52-account-create-update-gk7b2\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") " pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.187631 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxmn\" (UniqueName: \"kubernetes.io/projected/504df873-8902-4568-b465-40d75b755fee-kube-api-access-mpxmn\") pod \"cinder-a4a4-account-create-update-rvgtj\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") " pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.187661 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl6r7\" (UniqueName: \"kubernetes.io/projected/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-kube-api-access-nl6r7\") pod \"barbican-dd52-account-create-update-gk7b2\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") " pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.188425 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504df873-8902-4568-b465-40d75b755fee-operator-scripts\") pod \"cinder-a4a4-account-create-update-rvgtj\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") " pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.198909 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.199084 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.199201 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.199635 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2q9vq"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.202919 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vm2sd"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.224318 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.290029 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxmn\" (UniqueName: \"kubernetes.io/projected/504df873-8902-4568-b465-40d75b755fee-kube-api-access-mpxmn\") pod \"cinder-a4a4-account-create-update-rvgtj\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") " pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.292124 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-operator-scripts\") pod \"barbican-dd52-account-create-update-gk7b2\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") " pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.292166 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-config-data\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.292221 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlsx\" (UniqueName: \"kubernetes.io/projected/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-kube-api-access-5rlsx\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.292242 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl6r7\" (UniqueName: \"kubernetes.io/projected/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-kube-api-access-nl6r7\") pod \"barbican-dd52-account-create-update-gk7b2\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") " pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.292291 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-combined-ca-bundle\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.292836 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-operator-scripts\") pod \"barbican-dd52-account-create-update-gk7b2\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") " pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.304923 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c36c-account-create-update-qwx75"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.305931 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.310936 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.331372 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c36c-account-create-update-qwx75"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.331425 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl6r7\" (UniqueName: \"kubernetes.io/projected/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-kube-api-access-nl6r7\") pod \"barbican-dd52-account-create-update-gk7b2\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") " pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.342589 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-b4mqj"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.343588 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.352836 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b4mqj"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.393814 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vz4\" (UniqueName: \"kubernetes.io/projected/0e741872-f2ad-40b0-9447-797f97e11c82-kube-api-access-v4vz4\") pod \"neutron-c36c-account-create-update-qwx75\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") " pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.394006 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rlsx\" (UniqueName: \"kubernetes.io/projected/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-kube-api-access-5rlsx\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.394066 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-operator-scripts\") pod \"neutron-db-create-b4mqj\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") " pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.394162 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-combined-ca-bundle\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.394187 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e741872-f2ad-40b0-9447-797f97e11c82-operator-scripts\") pod \"neutron-c36c-account-create-update-qwx75\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") " pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.394216 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd7w7\" (UniqueName: \"kubernetes.io/projected/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-kube-api-access-wd7w7\") pod \"neutron-db-create-b4mqj\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") " pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.394247 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-config-data\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.398824 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-combined-ca-bundle\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.399823 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-config-data\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.408468 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.428592 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rlsx\" (UniqueName: \"kubernetes.io/projected/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-kube-api-access-5rlsx\") pod \"keystone-db-sync-vm2sd\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") " pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.495959 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e741872-f2ad-40b0-9447-797f97e11c82-operator-scripts\") pod \"neutron-c36c-account-create-update-qwx75\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") " pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.496015 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd7w7\" (UniqueName: \"kubernetes.io/projected/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-kube-api-access-wd7w7\") pod \"neutron-db-create-b4mqj\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") " pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.496062 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4vz4\" (UniqueName: \"kubernetes.io/projected/0e741872-f2ad-40b0-9447-797f97e11c82-kube-api-access-v4vz4\") pod \"neutron-c36c-account-create-update-qwx75\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") " pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.496118 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-operator-scripts\") pod \"neutron-db-create-b4mqj\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") " pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.496801 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-operator-scripts\") pod \"neutron-db-create-b4mqj\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") " pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.497300 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e741872-f2ad-40b0-9447-797f97e11c82-operator-scripts\") pod \"neutron-c36c-account-create-update-qwx75\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") " pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.517989 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd7w7\" (UniqueName: \"kubernetes.io/projected/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-kube-api-access-wd7w7\") pod \"neutron-db-create-b4mqj\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") " pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.518397 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4vz4\" (UniqueName: \"kubernetes.io/projected/0e741872-f2ad-40b0-9447-797f97e11c82-kube-api-access-v4vz4\") pod \"neutron-c36c-account-create-update-qwx75\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") " pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.559433 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.618734 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.641948 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.667573 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.799153 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rm8cx"]
Dec 05 01:31:43 crc kubenswrapper[4990]: I1205 01:31:43.908452 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-phsw5"]
Dec 05 01:31:43 crc kubenswrapper[4990]: W1205 01:31:43.909369 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod187debb7_c09c_43ee_b6bf_263a1df1d4e0.slice/crio-0d9bc1bd6af55593766e621b5f32353aa7926d02ef24297ecaa4b27377b224ff WatchSource:0}: Error finding container 0d9bc1bd6af55593766e621b5f32353aa7926d02ef24297ecaa4b27377b224ff: Status 404 returned error can't find the container with id 0d9bc1bd6af55593766e621b5f32353aa7926d02ef24297ecaa4b27377b224ff
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.037998 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dd52-account-create-update-gk7b2"]
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.193955 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a4a4-account-create-update-rvgtj"]
Dec 05 01:31:44 crc kubenswrapper[4990]: W1205 01:31:44.200771 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod504df873_8902_4568_b465_40d75b755fee.slice/crio-b1f44de2563e98eedb382b8a5b39be6c597d83794d4e210b8e1a07ee72f63a43 WatchSource:0}: Error finding container b1f44de2563e98eedb382b8a5b39be6c597d83794d4e210b8e1a07ee72f63a43: Status 404 returned error can't find the container with id b1f44de2563e98eedb382b8a5b39be6c597d83794d4e210b8e1a07ee72f63a43
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.202864 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vm2sd"]
Dec 05 01:31:44 crc kubenswrapper[4990]: W1205 01:31:44.214314 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3527aa1f_5b73_4fc1_b22e_0b6ee83a0e93.slice/crio-e1cf77f7b199fe05b887e674788cedc411884e73403f29cbfd5b87f770de98a0 WatchSource:0}: Error finding container e1cf77f7b199fe05b887e674788cedc411884e73403f29cbfd5b87f770de98a0: Status 404 returned error can't find the container with id e1cf77f7b199fe05b887e674788cedc411884e73403f29cbfd5b87f770de98a0
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.311712 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b4mqj"]
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.332535 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c36c-account-create-update-qwx75"]
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.846060 4990 generic.go:334] "Generic (PLEG): container finished" podID="42546ba1-6f6e-437c-90f1-53368b287b1a" containerID="fdd9caffdc54e14f6ca95d0eaa8abd10e382660103a959bd1a6e1e4de288814f" exitCode=0
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.846156 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rm8cx" event={"ID":"42546ba1-6f6e-437c-90f1-53368b287b1a","Type":"ContainerDied","Data":"fdd9caffdc54e14f6ca95d0eaa8abd10e382660103a959bd1a6e1e4de288814f"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.846419 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rm8cx" event={"ID":"42546ba1-6f6e-437c-90f1-53368b287b1a","Type":"ContainerStarted","Data":"0650b25ef9869ee5cdb3b20157486659f4c526aec35b9e5eb2ca217107799bf4"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.848004 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c36c-account-create-update-qwx75" event={"ID":"0e741872-f2ad-40b0-9447-797f97e11c82","Type":"ContainerStarted","Data":"f51a457180e6f16972fd89d552eb4c0e31055f62c38ccb33930acf53f64f2815"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.848033 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c36c-account-create-update-qwx75" event={"ID":"0e741872-f2ad-40b0-9447-797f97e11c82","Type":"ContainerStarted","Data":"9a7cd98391ea82b3fe8b7089812bf99e94d45ed2fb29e8ccbb7bc4fba28f92b5"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.870110 4990 generic.go:334] "Generic (PLEG): container finished" podID="187debb7-c09c-43ee-b6bf-263a1df1d4e0" containerID="959c98d753d2f71fdcd4c37382a282b3f4af36529378c1080b16d158907fa142" exitCode=0
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.870294 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-phsw5" event={"ID":"187debb7-c09c-43ee-b6bf-263a1df1d4e0","Type":"ContainerDied","Data":"959c98d753d2f71fdcd4c37382a282b3f4af36529378c1080b16d158907fa142"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.870319 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-phsw5" event={"ID":"187debb7-c09c-43ee-b6bf-263a1df1d4e0","Type":"ContainerStarted","Data":"0d9bc1bd6af55593766e621b5f32353aa7926d02ef24297ecaa4b27377b224ff"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.879063 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vm2sd" event={"ID":"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93","Type":"ContainerStarted","Data":"e1cf77f7b199fe05b887e674788cedc411884e73403f29cbfd5b87f770de98a0"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.880101 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c36c-account-create-update-qwx75" podStartSLOduration=1.88008612 podStartE2EDuration="1.88008612s" podCreationTimestamp="2025-12-05 01:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:44.877855166 +0000 UTC m=+1403.254070527" watchObservedRunningTime="2025-12-05 01:31:44.88008612 +0000 UTC m=+1403.256301491"
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.887223 4990 generic.go:334] "Generic (PLEG): container finished" podID="c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" containerID="c24f96e022dad61d2024dca7580d0d23a60db66f66c1ebe8fdb46bbc6050c570" exitCode=0
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.887362 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b4mqj" event={"ID":"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b","Type":"ContainerDied","Data":"c24f96e022dad61d2024dca7580d0d23a60db66f66c1ebe8fdb46bbc6050c570"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.887409 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b4mqj" event={"ID":"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b","Type":"ContainerStarted","Data":"1439a8ef999bfbf4ddd703302a0df613c47b7286eda81bdad7630fd5a5feb5cf"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.891191 4990 generic.go:334] "Generic (PLEG): container finished" podID="3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" containerID="1301884c7cfb4fdd1cf59676cfbbdb5767dae769ecedc426439c67dfdc545613" exitCode=0
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.891380 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dd52-account-create-update-gk7b2" event={"ID":"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5","Type":"ContainerDied","Data":"1301884c7cfb4fdd1cf59676cfbbdb5767dae769ecedc426439c67dfdc545613"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.891407 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dd52-account-create-update-gk7b2" event={"ID":"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5","Type":"ContainerStarted","Data":"36a58e9d971e616ec06b09600add3d5843e9a3c6db7295025032097e8abf2ecf"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.895405 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a4a4-account-create-update-rvgtj" event={"ID":"504df873-8902-4568-b465-40d75b755fee","Type":"ContainerStarted","Data":"44c22461d8201321942c5ec130ec3a61ad61285b0502ddaeb081925c0921d588"}
Dec 05 01:31:44 crc kubenswrapper[4990]: I1205 01:31:44.895448 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a4a4-account-create-update-rvgtj" event={"ID":"504df873-8902-4568-b465-40d75b755fee","Type":"ContainerStarted","Data":"b1f44de2563e98eedb382b8a5b39be6c597d83794d4e210b8e1a07ee72f63a43"}
Dec 05 01:31:45 crc kubenswrapper[4990]: I1205 01:31:45.921813 4990 generic.go:334] "Generic (PLEG): container finished" podID="0e741872-f2ad-40b0-9447-797f97e11c82" containerID="f51a457180e6f16972fd89d552eb4c0e31055f62c38ccb33930acf53f64f2815" exitCode=0
Dec 05 01:31:45 crc kubenswrapper[4990]: I1205 01:31:45.921970 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c36c-account-create-update-qwx75" event={"ID":"0e741872-f2ad-40b0-9447-797f97e11c82","Type":"ContainerDied","Data":"f51a457180e6f16972fd89d552eb4c0e31055f62c38ccb33930acf53f64f2815"}
Dec 05 01:31:45 crc kubenswrapper[4990]: I1205 01:31:45.946045 4990 generic.go:334] "Generic (PLEG): container finished" podID="504df873-8902-4568-b465-40d75b755fee" containerID="44c22461d8201321942c5ec130ec3a61ad61285b0502ddaeb081925c0921d588" exitCode=0
Dec 05 01:31:45 crc kubenswrapper[4990]: I1205 01:31:45.954309 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a4a4-account-create-update-rvgtj" event={"ID":"504df873-8902-4568-b465-40d75b755fee","Type":"ContainerDied","Data":"44c22461d8201321942c5ec130ec3a61ad61285b0502ddaeb081925c0921d588"}
Dec 05 01:31:48 crc kubenswrapper[4990]: I1205 01:31:48.989603 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-phsw5" event={"ID":"187debb7-c09c-43ee-b6bf-263a1df1d4e0","Type":"ContainerDied","Data":"0d9bc1bd6af55593766e621b5f32353aa7926d02ef24297ecaa4b27377b224ff"}
Dec 05 01:31:48 crc kubenswrapper[4990]: I1205 01:31:48.990542 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d9bc1bd6af55593766e621b5f32353aa7926d02ef24297ecaa4b27377b224ff"
Dec 05 01:31:48 crc kubenswrapper[4990]: I1205 01:31:48.995173 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b4mqj" event={"ID":"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b","Type":"ContainerDied","Data":"1439a8ef999bfbf4ddd703302a0df613c47b7286eda81bdad7630fd5a5feb5cf"}
Dec 05 01:31:48 crc kubenswrapper[4990]: I1205 01:31:48.995216 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1439a8ef999bfbf4ddd703302a0df613c47b7286eda81bdad7630fd5a5feb5cf"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.003546 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dd52-account-create-update-gk7b2" event={"ID":"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5","Type":"ContainerDied","Data":"36a58e9d971e616ec06b09600add3d5843e9a3c6db7295025032097e8abf2ecf"}
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.003620 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a58e9d971e616ec06b09600add3d5843e9a3c6db7295025032097e8abf2ecf"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.007046 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a4a4-account-create-update-rvgtj" event={"ID":"504df873-8902-4568-b465-40d75b755fee","Type":"ContainerDied","Data":"b1f44de2563e98eedb382b8a5b39be6c597d83794d4e210b8e1a07ee72f63a43"}
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.007103 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1f44de2563e98eedb382b8a5b39be6c597d83794d4e210b8e1a07ee72f63a43"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.009173 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rm8cx" event={"ID":"42546ba1-6f6e-437c-90f1-53368b287b1a","Type":"ContainerDied","Data":"0650b25ef9869ee5cdb3b20157486659f4c526aec35b9e5eb2ca217107799bf4"}
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.009211 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0650b25ef9869ee5cdb3b20157486659f4c526aec35b9e5eb2ca217107799bf4"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.011074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c36c-account-create-update-qwx75" event={"ID":"0e741872-f2ad-40b0-9447-797f97e11c82","Type":"ContainerDied","Data":"9a7cd98391ea82b3fe8b7089812bf99e94d45ed2fb29e8ccbb7bc4fba28f92b5"}
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.011115 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7cd98391ea82b3fe8b7089812bf99e94d45ed2fb29e8ccbb7bc4fba28f92b5"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.142856 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.170069 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.199956 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.200441 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.217013 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpxmn\" (UniqueName: \"kubernetes.io/projected/504df873-8902-4568-b465-40d75b755fee-kube-api-access-mpxmn\") pod \"504df873-8902-4568-b465-40d75b755fee\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.217381 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504df873-8902-4568-b465-40d75b755fee-operator-scripts\") pod \"504df873-8902-4568-b465-40d75b755fee\" (UID: \"504df873-8902-4568-b465-40d75b755fee\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.223911 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/504df873-8902-4568-b465-40d75b755fee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "504df873-8902-4568-b465-40d75b755fee" (UID: "504df873-8902-4568-b465-40d75b755fee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.237405 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504df873-8902-4568-b465-40d75b755fee-kube-api-access-mpxmn" (OuterVolumeSpecName: "kube-api-access-mpxmn") pod "504df873-8902-4568-b465-40d75b755fee" (UID: "504df873-8902-4568-b465-40d75b755fee"). InnerVolumeSpecName "kube-api-access-mpxmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.238897 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.240696 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319419 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhgvc\" (UniqueName: \"kubernetes.io/projected/42546ba1-6f6e-437c-90f1-53368b287b1a-kube-api-access-jhgvc\") pod \"42546ba1-6f6e-437c-90f1-53368b287b1a\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319463 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187debb7-c09c-43ee-b6bf-263a1df1d4e0-operator-scripts\") pod \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319604 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpjch\" (UniqueName: \"kubernetes.io/projected/187debb7-c09c-43ee-b6bf-263a1df1d4e0-kube-api-access-xpjch\") pod \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\" (UID: \"187debb7-c09c-43ee-b6bf-263a1df1d4e0\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319631 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42546ba1-6f6e-437c-90f1-53368b287b1a-operator-scripts\") pod \"42546ba1-6f6e-437c-90f1-53368b287b1a\" (UID: \"42546ba1-6f6e-437c-90f1-53368b287b1a\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319669 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-operator-scripts\") pod \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319710 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e741872-f2ad-40b0-9447-797f97e11c82-operator-scripts\") pod \"0e741872-f2ad-40b0-9447-797f97e11c82\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319786 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-operator-scripts\") pod \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319811 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd7w7\" (UniqueName: \"kubernetes.io/projected/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-kube-api-access-wd7w7\") pod \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\" (UID: \"c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319871 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4vz4\" (UniqueName: \"kubernetes.io/projected/0e741872-f2ad-40b0-9447-797f97e11c82-kube-api-access-v4vz4\") pod \"0e741872-f2ad-40b0-9447-797f97e11c82\" (UID: \"0e741872-f2ad-40b0-9447-797f97e11c82\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.319909 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl6r7\" (UniqueName: \"kubernetes.io/projected/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-kube-api-access-nl6r7\") pod \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\" (UID: \"3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5\") "
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.320213 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504df873-8902-4568-b465-40d75b755fee-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.320229 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpxmn\" (UniqueName: \"kubernetes.io/projected/504df873-8902-4568-b465-40d75b755fee-kube-api-access-mpxmn\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.321010 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" (UID: "c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.321073 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187debb7-c09c-43ee-b6bf-263a1df1d4e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "187debb7-c09c-43ee-b6bf-263a1df1d4e0" (UID: "187debb7-c09c-43ee-b6bf-263a1df1d4e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.321540 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e741872-f2ad-40b0-9447-797f97e11c82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e741872-f2ad-40b0-9447-797f97e11c82" (UID: "0e741872-f2ad-40b0-9447-797f97e11c82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.322242 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" (UID: "3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.323435 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42546ba1-6f6e-437c-90f1-53368b287b1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42546ba1-6f6e-437c-90f1-53368b287b1a" (UID: "42546ba1-6f6e-437c-90f1-53368b287b1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.324234 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42546ba1-6f6e-437c-90f1-53368b287b1a-kube-api-access-jhgvc" (OuterVolumeSpecName: "kube-api-access-jhgvc") pod "42546ba1-6f6e-437c-90f1-53368b287b1a" (UID: "42546ba1-6f6e-437c-90f1-53368b287b1a"). InnerVolumeSpecName "kube-api-access-jhgvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.324795 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-kube-api-access-nl6r7" (OuterVolumeSpecName: "kube-api-access-nl6r7") pod "3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" (UID: "3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5"). InnerVolumeSpecName "kube-api-access-nl6r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.326815 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187debb7-c09c-43ee-b6bf-263a1df1d4e0-kube-api-access-xpjch" (OuterVolumeSpecName: "kube-api-access-xpjch") pod "187debb7-c09c-43ee-b6bf-263a1df1d4e0" (UID: "187debb7-c09c-43ee-b6bf-263a1df1d4e0"). InnerVolumeSpecName "kube-api-access-xpjch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.326981 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-kube-api-access-wd7w7" (OuterVolumeSpecName: "kube-api-access-wd7w7") pod "c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" (UID: "c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b"). InnerVolumeSpecName "kube-api-access-wd7w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.328270 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e741872-f2ad-40b0-9447-797f97e11c82-kube-api-access-v4vz4" (OuterVolumeSpecName: "kube-api-access-v4vz4") pod "0e741872-f2ad-40b0-9447-797f97e11c82" (UID: "0e741872-f2ad-40b0-9447-797f97e11c82"). InnerVolumeSpecName "kube-api-access-v4vz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422116 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4vz4\" (UniqueName: \"kubernetes.io/projected/0e741872-f2ad-40b0-9447-797f97e11c82-kube-api-access-v4vz4\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422330 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl6r7\" (UniqueName: \"kubernetes.io/projected/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-kube-api-access-nl6r7\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422421 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhgvc\" (UniqueName: \"kubernetes.io/projected/42546ba1-6f6e-437c-90f1-53368b287b1a-kube-api-access-jhgvc\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422525 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187debb7-c09c-43ee-b6bf-263a1df1d4e0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422615 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpjch\" (UniqueName: \"kubernetes.io/projected/187debb7-c09c-43ee-b6bf-263a1df1d4e0-kube-api-access-xpjch\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422701 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42546ba1-6f6e-437c-90f1-53368b287b1a-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422776 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422865 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e741872-f2ad-40b0-9447-797f97e11c82-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.422941 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:49 crc kubenswrapper[4990]: I1205 01:31:49.423019 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd7w7\" (UniqueName: \"kubernetes.io/projected/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b-kube-api-access-wd7w7\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.020042 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-56hk7" event={"ID":"bfef6189-60ca-4088-97fa-6dc3fb1e1a52","Type":"ContainerStarted","Data":"47cb92293894b144fb1235b48e3276d87fdd7829c486d8d12d6658a335ec3215"}
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.022987 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-phsw5"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.023075 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a4a4-account-create-update-rvgtj"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.023088 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b4mqj"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.023158 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vm2sd" event={"ID":"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93","Type":"ContainerStarted","Data":"340420a2025d54544eda878133b62062f1bf33394109ba9351def100e40307e9"}
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.023255 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rm8cx"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.023310 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c36c-account-create-update-qwx75"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.023441 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dd52-account-create-update-gk7b2"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.052957 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-56hk7" podStartSLOduration=1.712308566 podStartE2EDuration="31.052926806s" podCreationTimestamp="2025-12-05 01:31:19 +0000 UTC" firstStartedPulling="2025-12-05 01:31:20.008932698 +0000 UTC m=+1378.385148059" lastFinishedPulling="2025-12-05 01:31:49.349550938 +0000 UTC m=+1407.725766299" observedRunningTime="2025-12-05 01:31:50.039856345 +0000 UTC m=+1408.416071716" watchObservedRunningTime="2025-12-05 01:31:50.052926806 +0000 UTC m=+1408.429142187"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.062350 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vm2sd" podStartSLOduration=2.2679392050000002 podStartE2EDuration="7.062324683s" podCreationTimestamp="2025-12-05 01:31:43 +0000 UTC" firstStartedPulling="2025-12-05 01:31:44.224874588 +0000 UTC m=+1402.601089949" lastFinishedPulling="2025-12-05 01:31:49.019260066 +0000 UTC m=+1407.395475427" observedRunningTime="2025-12-05 01:31:50.059598815 +0000 UTC m=+1408.435814166" watchObservedRunningTime="2025-12-05 01:31:50.062324683 +0000 UTC m=+1408.438540054"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.476712 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b"
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.561179 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gml76"]
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.562232 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-gml76" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerName="dnsmasq-dns" containerID="cri-o://81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1" gracePeriod=10
Dec 05 01:31:50 crc kubenswrapper[4990]: I1205 01:31:50.993423 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.047012 4990 generic.go:334] "Generic (PLEG): container finished" podID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerID="81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1" exitCode=0
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.047090 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gml76"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.047113 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gml76" event={"ID":"513896a8-02b3-417f-95a4-7ec45b07b61a","Type":"ContainerDied","Data":"81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1"}
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.049060 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gml76" event={"ID":"513896a8-02b3-417f-95a4-7ec45b07b61a","Type":"ContainerDied","Data":"bf3a21ac9a4285c6e5a0edfea71759ca0368019f2b9f8d256bdacb5ea26f3d7b"}
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.049108 4990 scope.go:117] "RemoveContainer" containerID="81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.076417 4990 scope.go:117] "RemoveContainer" containerID="3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.119359 4990 scope.go:117] "RemoveContainer" containerID="81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1"
Dec 05 01:31:51 crc kubenswrapper[4990]: E1205 01:31:51.119898 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1\": container with ID starting with 81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1 not found: ID does not exist" containerID="81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.119942 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1"} err="failed to get container status \"81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1\": rpc error: code = NotFound desc = could not find container \"81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1\": container with ID starting with 81233f1818872534cd91c352e9b720d089f95fa168ff458a88d040368e7d50e1 not found: ID does not exist"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.119970 4990 scope.go:117] "RemoveContainer" containerID="3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b"
Dec 05 01:31:51 crc kubenswrapper[4990]: E1205 01:31:51.120446 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b\": container with ID starting with 3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b not found: ID does not exist" containerID="3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.120502 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b"} err="failed to get container status \"3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b\": rpc error: code = NotFound desc = could not find container \"3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b\": container with ID starting with 3641fc7885d5143a0e7a9b14bb83558160a391d895ea61b0c6b5a6dfa852065b not found: ID does not exist"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.158841 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-sb\") pod \"513896a8-02b3-417f-95a4-7ec45b07b61a\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") "
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.158961 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7nct\" (UniqueName: \"kubernetes.io/projected/513896a8-02b3-417f-95a4-7ec45b07b61a-kube-api-access-d7nct\") pod \"513896a8-02b3-417f-95a4-7ec45b07b61a\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") "
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.158983 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-nb\") pod \"513896a8-02b3-417f-95a4-7ec45b07b61a\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") "
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.159075 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-dns-svc\") pod \"513896a8-02b3-417f-95a4-7ec45b07b61a\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") "
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.159131 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-config\") pod \"513896a8-02b3-417f-95a4-7ec45b07b61a\" (UID: \"513896a8-02b3-417f-95a4-7ec45b07b61a\") "
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.166773 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513896a8-02b3-417f-95a4-7ec45b07b61a-kube-api-access-d7nct" (OuterVolumeSpecName: "kube-api-access-d7nct") pod "513896a8-02b3-417f-95a4-7ec45b07b61a" (UID: "513896a8-02b3-417f-95a4-7ec45b07b61a"). InnerVolumeSpecName "kube-api-access-d7nct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.214084 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-config" (OuterVolumeSpecName: "config") pod "513896a8-02b3-417f-95a4-7ec45b07b61a" (UID: "513896a8-02b3-417f-95a4-7ec45b07b61a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.216730 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "513896a8-02b3-417f-95a4-7ec45b07b61a" (UID: "513896a8-02b3-417f-95a4-7ec45b07b61a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.220228 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "513896a8-02b3-417f-95a4-7ec45b07b61a" (UID: "513896a8-02b3-417f-95a4-7ec45b07b61a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.240120 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "513896a8-02b3-417f-95a4-7ec45b07b61a" (UID: "513896a8-02b3-417f-95a4-7ec45b07b61a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.261318 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7nct\" (UniqueName: \"kubernetes.io/projected/513896a8-02b3-417f-95a4-7ec45b07b61a-kube-api-access-d7nct\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.261668 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.261770 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.261867 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-config\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.261958 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/513896a8-02b3-417f-95a4-7ec45b07b61a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.380756 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gml76"]
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.387273 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gml76"]
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.823693 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.824230 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 01:31:51 crc kubenswrapper[4990]: I1205 01:31:51.946812 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" path="/var/lib/kubelet/pods/513896a8-02b3-417f-95a4-7ec45b07b61a/volumes"
Dec 05 01:31:53 crc kubenswrapper[4990]: I1205 01:31:53.067599 4990 generic.go:334] "Generic (PLEG): container finished" podID="3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" containerID="340420a2025d54544eda878133b62062f1bf33394109ba9351def100e40307e9" exitCode=0
Dec 05 01:31:53 crc kubenswrapper[4990]: I1205 01:31:53.067646 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vm2sd" event={"ID":"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93","Type":"ContainerDied","Data":"340420a2025d54544eda878133b62062f1bf33394109ba9351def100e40307e9"}
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.424622 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vm2sd"
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.514259 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-config-data\") pod \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") "
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.514378 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rlsx\" (UniqueName: \"kubernetes.io/projected/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-kube-api-access-5rlsx\") pod \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") "
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.514456 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-combined-ca-bundle\") pod \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\" (UID: \"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93\") "
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.520120 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-kube-api-access-5rlsx" (OuterVolumeSpecName: "kube-api-access-5rlsx") pod "3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" (UID: "3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93"). InnerVolumeSpecName "kube-api-access-5rlsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.539306 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" (UID: "3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.567633 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-config-data" (OuterVolumeSpecName: "config-data") pod "3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" (UID: "3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.616159 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.616196 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rlsx\" (UniqueName: \"kubernetes.io/projected/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-kube-api-access-5rlsx\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:54 crc kubenswrapper[4990]: I1205 01:31:54.616210 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.092218 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vm2sd" event={"ID":"3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93","Type":"ContainerDied","Data":"e1cf77f7b199fe05b887e674788cedc411884e73403f29cbfd5b87f770de98a0"} Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.092274 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1cf77f7b199fe05b887e674788cedc411884e73403f29cbfd5b87f770de98a0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.092323 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vm2sd" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.342785 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-nwqdp"] Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343364 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187debb7-c09c-43ee-b6bf-263a1df1d4e0" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343386 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="187debb7-c09c-43ee-b6bf-263a1df1d4e0" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343403 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerName="dnsmasq-dns" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343411 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerName="dnsmasq-dns" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343422 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504df873-8902-4568-b465-40d75b755fee" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343428 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="504df873-8902-4568-b465-40d75b755fee" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343437 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42546ba1-6f6e-437c-90f1-53368b287b1a" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343442 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="42546ba1-6f6e-437c-90f1-53368b287b1a" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343456 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e741872-f2ad-40b0-9447-797f97e11c82" 
containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343462 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e741872-f2ad-40b0-9447-797f97e11c82" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343493 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" containerName="keystone-db-sync" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343499 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" containerName="keystone-db-sync" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343521 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343527 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343537 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343542 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: E1205 01:31:55.343550 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerName="init" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343555 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerName="init" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343706 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343721 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="504df873-8902-4568-b465-40d75b755fee" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343737 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" containerName="keystone-db-sync" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343748 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="187debb7-c09c-43ee-b6bf-263a1df1d4e0" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343756 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="513896a8-02b3-417f-95a4-7ec45b07b61a" containerName="dnsmasq-dns" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343765 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e741872-f2ad-40b0-9447-797f97e11c82" containerName="mariadb-account-create-update" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343774 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" containerName="mariadb-database-create" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.343781 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="42546ba1-6f6e-437c-90f1-53368b287b1a" containerName="mariadb-database-create" Dec 
05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.344582 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.381920 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-nwqdp"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.436005 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-config\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.436101 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.436152 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.436174 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7tv\" (UniqueName: \"kubernetes.io/projected/d454c127-1f92-438f-a5a7-475694c13887-kube-api-access-dd7tv\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.436396 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.436578 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-svc\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.460496 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r7rb6"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.461435 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.464311 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.464467 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.475748 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2q9vq" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.479743 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.479806 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.498895 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r7rb6"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538205 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-config\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538267 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h72g\" (UniqueName: \"kubernetes.io/projected/defcff17-042a-43c3-a963-3f3667abd371-kube-api-access-5h72g\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538300 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538331 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538353 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7tv\" (UniqueName: \"kubernetes.io/projected/d454c127-1f92-438f-a5a7-475694c13887-kube-api-access-dd7tv\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538374 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-config-data\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538390 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-combined-ca-bundle\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538434 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538454 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-scripts\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538507 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-credential-keys\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538528 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-svc\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.538552 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-fernet-keys\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.539450 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-config\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.539982 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.540467 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.541283 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.541916 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-svc\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.567387 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7tv\" (UniqueName: \"kubernetes.io/projected/d454c127-1f92-438f-a5a7-475694c13887-kube-api-access-dd7tv\") pod \"dnsmasq-dns-5959f8865f-nwqdp\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.639569 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-scripts\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.639633 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-credential-keys\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.639662 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-fernet-keys\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.639714 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h72g\" (UniqueName: \"kubernetes.io/projected/defcff17-042a-43c3-a963-3f3667abd371-kube-api-access-5h72g\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.639758 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-config-data\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.639774 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-combined-ca-bundle\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.648901 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-scripts\") pod \"keystone-bootstrap-r7rb6\" (UID: 
\"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.650100 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-combined-ca-bundle\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.652004 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-credential-keys\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.656199 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-config-data\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.666830 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.679516 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-fernet-keys\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.680155 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h72g\" (UniqueName: \"kubernetes.io/projected/defcff17-042a-43c3-a963-3f3667abd371-kube-api-access-5h72g\") pod \"keystone-bootstrap-r7rb6\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.787980 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.799728 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.801596 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.810267 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.810435 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.832822 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.885477 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2qddb"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.887586 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.890798 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.891637 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.891990 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qv77m" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.906536 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fr28q"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.907607 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.913108 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.913346 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.913592 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zpj4v" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.923143 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kzf4n"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.924140 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.929471 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.929679 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tvmcj" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.950549 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2qddb"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.962424 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kzf4n"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.968211 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fr28q"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.985275 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n9mqs"] Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986658 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986687 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-config-data\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986706 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-config-data\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986728 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-run-httpd\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986744 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-log-httpd\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986759 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgg4\" (UniqueName: \"kubernetes.io/projected/67d8cea8-3def-4f61-838e-36ffae0c8705-kube-api-access-kzgg4\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986791 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cf00e7d-d396-4719-b077-bd14781d8836-etc-machine-id\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986811 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986834 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-db-sync-config-data\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986856 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-scripts\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986889 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-combined-ca-bundle\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986923 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlqr\" (UniqueName: 
\"kubernetes.io/projected/1cf00e7d-d396-4719-b077-bd14781d8836-kube-api-access-ddlqr\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.986961 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-scripts\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:55 crc kubenswrapper[4990]: I1205 01:31:55.987270 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.003758 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rq8p4" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.003967 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.004151 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.035927 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-nwqdp"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.055057 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n9mqs"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.090768 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-fd2h7"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091755 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddlqr\" (UniqueName: \"kubernetes.io/projected/1cf00e7d-d396-4719-b077-bd14781d8836-kube-api-access-ddlqr\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091803 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-combined-ca-bundle\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091823 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-logs\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091840 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-combined-ca-bundle\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091862 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hlzk\" (UniqueName: 
\"kubernetes.io/projected/d5be3dfc-61e4-495c-8b0b-22f417664a9c-kube-api-access-4hlzk\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091894 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-config-data\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091922 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-scripts\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091950 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-scripts\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.091983 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092001 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-config-data\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092020 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-config-data\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092036 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-combined-ca-bundle\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092065 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-run-httpd\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092092 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgg4\" (UniqueName: \"kubernetes.io/projected/67d8cea8-3def-4f61-838e-36ffae0c8705-kube-api-access-kzgg4\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc 
kubenswrapper[4990]: I1205 01:31:56.092112 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-log-httpd\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092137 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-config\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092162 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpvl4\" (UniqueName: \"kubernetes.io/projected/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-kube-api-access-mpvl4\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092191 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cf00e7d-d396-4719-b077-bd14781d8836-etc-machine-id\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092208 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092209 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-db-sync-config-data\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092664 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092689 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-db-sync-config-data\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092713 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-scripts\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092780 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2z5g\" (UniqueName: \"kubernetes.io/projected/04c2a959-3818-40b2-8182-7fa3287ee0df-kube-api-access-w2z5g\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " 
pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.092808 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-combined-ca-bundle\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.093376 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-run-httpd\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.094593 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cf00e7d-d396-4719-b077-bd14781d8836-etc-machine-id\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.094861 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-log-httpd\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.099181 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-fd2h7"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.103514 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.104146 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-config-data\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.116848 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-scripts\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.116871 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.117198 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-config-data\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.117390 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgg4\" (UniqueName: 
\"kubernetes.io/projected/67d8cea8-3def-4f61-838e-36ffae0c8705-kube-api-access-kzgg4\") pod \"ceilometer-0\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.117408 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-db-sync-config-data\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.117700 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-scripts\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.117963 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-combined-ca-bundle\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.124417 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddlqr\" (UniqueName: \"kubernetes.io/projected/1cf00e7d-d396-4719-b077-bd14781d8836-kube-api-access-ddlqr\") pod \"cinder-db-sync-2qddb\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.194876 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-db-sync-config-data\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.194946 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-config\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.194969 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2z5g\" (UniqueName: \"kubernetes.io/projected/04c2a959-3818-40b2-8182-7fa3287ee0df-kube-api-access-w2z5g\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.194986 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195020 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-combined-ca-bundle\") pod \"neutron-db-sync-fr28q\" (UID: 
\"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195037 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-logs\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195053 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-combined-ca-bundle\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195069 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hlzk\" (UniqueName: \"kubernetes.io/projected/d5be3dfc-61e4-495c-8b0b-22f417664a9c-kube-api-access-4hlzk\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195085 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195115 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4z9\" (UniqueName: \"kubernetes.io/projected/3a0238f4-33e8-4123-9395-a266d5dce8e2-kube-api-access-jm4z9\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195137 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-config-data\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195161 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195200 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-scripts\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195235 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-combined-ca-bundle\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " 
pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195252 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195280 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-config\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.195320 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpvl4\" (UniqueName: \"kubernetes.io/projected/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-kube-api-access-mpvl4\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.198062 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-logs\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.198842 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-db-sync-config-data\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.201256 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-config-data\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.201798 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-config\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.202299 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-combined-ca-bundle\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.203988 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-combined-ca-bundle\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.204108 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-combined-ca-bundle\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.210030 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-scripts\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.211849 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hlzk\" (UniqueName: \"kubernetes.io/projected/d5be3dfc-61e4-495c-8b0b-22f417664a9c-kube-api-access-4hlzk\") pod \"barbican-db-sync-kzf4n\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.212093 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2z5g\" (UniqueName: \"kubernetes.io/projected/04c2a959-3818-40b2-8182-7fa3287ee0df-kube-api-access-w2z5g\") pod \"neutron-db-sync-fr28q\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.213607 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpvl4\" (UniqueName: \"kubernetes.io/projected/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-kube-api-access-mpvl4\") pod \"placement-db-sync-n9mqs\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.214001 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.272072 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2qddb" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.282680 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fr28q" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.296452 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.296513 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4z9\" (UniqueName: \"kubernetes.io/projected/3a0238f4-33e8-4123-9395-a266d5dce8e2-kube-api-access-jm4z9\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.296574 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.296629 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.296689 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-config\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.296709 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.297423 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.297447 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.298111 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc 
kubenswrapper[4990]: I1205 01:31:56.298136 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.298545 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-config\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.322321 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4z9\" (UniqueName: \"kubernetes.io/projected/3a0238f4-33e8-4123-9395-a266d5dce8e2-kube-api-access-jm4z9\") pod \"dnsmasq-dns-58dd9ff6bc-fd2h7\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.341462 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.357829 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n9mqs" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.381569 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-nwqdp"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.416282 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.482907 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r7rb6"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.778460 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.867116 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fr28q"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.893819 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2qddb"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.975730 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-fd2h7"] Dec 05 01:31:56 crc kubenswrapper[4990]: I1205 01:31:56.997266 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kzf4n"] Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.004494 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n9mqs"] Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.123998 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerStarted","Data":"8319b58a4db1dea607f2727f0af880660646dd60aea4146efebee8d39423a88c"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.129517 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7rb6" event={"ID":"defcff17-042a-43c3-a963-3f3667abd371","Type":"ContainerStarted","Data":"276d846fcb3add4feecfd49dda876e7767dd3082e126a6095d86ceb7aaba3127"} Dec 05 
01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.129587 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7rb6" event={"ID":"defcff17-042a-43c3-a963-3f3667abd371","Type":"ContainerStarted","Data":"0819b3f605de3a632e2a131db8f6f1552c8353f212ec43fb28ed8507d16e45f1"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.134823 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fr28q" event={"ID":"04c2a959-3818-40b2-8182-7fa3287ee0df","Type":"ContainerStarted","Data":"750d005d420f178223f660e800357b879d55b21927ed874cb1fcdd7f487e051e"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.137563 4990 generic.go:334] "Generic (PLEG): container finished" podID="d454c127-1f92-438f-a5a7-475694c13887" containerID="18fc41f630c14b3d3823bfab49610b5ff9e4d40bc5714679da89b4d08578714e" exitCode=0 Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.137622 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" event={"ID":"d454c127-1f92-438f-a5a7-475694c13887","Type":"ContainerDied","Data":"18fc41f630c14b3d3823bfab49610b5ff9e4d40bc5714679da89b4d08578714e"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.137644 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" event={"ID":"d454c127-1f92-438f-a5a7-475694c13887","Type":"ContainerStarted","Data":"6bd03232c69a0569474d9d983ab2a08611659e755fbd0ad0bb21861b1fd2ba76"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.139314 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2qddb" event={"ID":"1cf00e7d-d396-4719-b077-bd14781d8836","Type":"ContainerStarted","Data":"134274b6e147d54c79e9160d421c2a97a68b77944b8dc07b518ecdd7ec157280"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.140800 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" event={"ID":"3a0238f4-33e8-4123-9395-a266d5dce8e2","Type":"ContainerStarted","Data":"b1a9a32842e9617d1c4feb4d87d1dd801ab576404e801c1056256f580dcc0a9d"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.144729 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n9mqs" event={"ID":"da5e4277-78a0-4eca-b9f6-67fc6c925ed1","Type":"ContainerStarted","Data":"b5b996ae5aeed152b3b20ee1b7a7c4f625bcdf6c864253259b6e24a0b54ea650"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.152817 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r7rb6" podStartSLOduration=2.15278287 podStartE2EDuration="2.15278287s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:57.145001349 +0000 UTC m=+1415.521216720" watchObservedRunningTime="2025-12-05 01:31:57.15278287 +0000 UTC m=+1415.528998231" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.153833 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kzf4n" event={"ID":"d5be3dfc-61e4-495c-8b0b-22f417664a9c","Type":"ContainerStarted","Data":"a8863e20559e14da0748ec57aaaa9174b9cb8bac8de2f651cfacc4189a87f18d"} Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.366750 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.529417 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-sb\") pod \"d454c127-1f92-438f-a5a7-475694c13887\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.529520 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-config\") pod \"d454c127-1f92-438f-a5a7-475694c13887\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.529551 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-swift-storage-0\") pod \"d454c127-1f92-438f-a5a7-475694c13887\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.529633 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-svc\") pod \"d454c127-1f92-438f-a5a7-475694c13887\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.529700 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd7tv\" (UniqueName: \"kubernetes.io/projected/d454c127-1f92-438f-a5a7-475694c13887-kube-api-access-dd7tv\") pod \"d454c127-1f92-438f-a5a7-475694c13887\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.529813 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-nb\") pod \"d454c127-1f92-438f-a5a7-475694c13887\" (UID: \"d454c127-1f92-438f-a5a7-475694c13887\") " Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.534825 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d454c127-1f92-438f-a5a7-475694c13887-kube-api-access-dd7tv" (OuterVolumeSpecName: "kube-api-access-dd7tv") pod "d454c127-1f92-438f-a5a7-475694c13887" (UID: "d454c127-1f92-438f-a5a7-475694c13887"). InnerVolumeSpecName "kube-api-access-dd7tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.558325 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d454c127-1f92-438f-a5a7-475694c13887" (UID: "d454c127-1f92-438f-a5a7-475694c13887"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.558355 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-config" (OuterVolumeSpecName: "config") pod "d454c127-1f92-438f-a5a7-475694c13887" (UID: "d454c127-1f92-438f-a5a7-475694c13887"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.560903 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d454c127-1f92-438f-a5a7-475694c13887" (UID: "d454c127-1f92-438f-a5a7-475694c13887"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.593654 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d454c127-1f92-438f-a5a7-475694c13887" (UID: "d454c127-1f92-438f-a5a7-475694c13887"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.594073 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d454c127-1f92-438f-a5a7-475694c13887" (UID: "d454c127-1f92-438f-a5a7-475694c13887"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.634953 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd7tv\" (UniqueName: \"kubernetes.io/projected/d454c127-1f92-438f-a5a7-475694c13887-kube-api-access-dd7tv\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.634990 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.634999 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.635010 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.635020 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:57 crc kubenswrapper[4990]: I1205 01:31:57.635028 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d454c127-1f92-438f-a5a7-475694c13887-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.167188 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fr28q" event={"ID":"04c2a959-3818-40b2-8182-7fa3287ee0df","Type":"ContainerStarted","Data":"93c1c5b43e3e314a2c0392ec6bff92ab1b0d03d6e4c4d5249b0654ad3fee74c1"} Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.168909 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" 
event={"ID":"d454c127-1f92-438f-a5a7-475694c13887","Type":"ContainerDied","Data":"6bd03232c69a0569474d9d983ab2a08611659e755fbd0ad0bb21861b1fd2ba76"} Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.168937 4990 scope.go:117] "RemoveContainer" containerID="18fc41f630c14b3d3823bfab49610b5ff9e4d40bc5714679da89b4d08578714e" Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.169047 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-nwqdp" Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.175700 4990 generic.go:334] "Generic (PLEG): container finished" podID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerID="5ac8b1216ca4f2e9c8a17a9c804a21e79244aef9049362cc3f6228bdca4779d9" exitCode=0 Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.175823 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" event={"ID":"3a0238f4-33e8-4123-9395-a266d5dce8e2","Type":"ContainerDied","Data":"5ac8b1216ca4f2e9c8a17a9c804a21e79244aef9049362cc3f6228bdca4779d9"} Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.202841 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fr28q" podStartSLOduration=3.202822394 podStartE2EDuration="3.202822394s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:58.191893784 +0000 UTC m=+1416.568109145" watchObservedRunningTime="2025-12-05 01:31:58.202822394 +0000 UTC m=+1416.579037755" Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.251929 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-nwqdp"] Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.260197 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-nwqdp"] Dec 05 01:31:58 crc kubenswrapper[4990]: I1205 01:31:58.595359 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:31:59 crc kubenswrapper[4990]: I1205 01:31:59.189702 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" event={"ID":"3a0238f4-33e8-4123-9395-a266d5dce8e2","Type":"ContainerStarted","Data":"cde0fce916af11e2e2641db3965a2cf8c96e1538fa0f928c93dfc5e42ebc5aae"} Dec 05 01:31:59 crc kubenswrapper[4990]: I1205 01:31:59.190090 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:31:59 crc kubenswrapper[4990]: I1205 01:31:59.201895 4990 generic.go:334] "Generic (PLEG): container finished" podID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" containerID="47cb92293894b144fb1235b48e3276d87fdd7829c486d8d12d6658a335ec3215" exitCode=0 Dec 05 01:31:59 crc kubenswrapper[4990]: I1205 01:31:59.202723 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-56hk7" event={"ID":"bfef6189-60ca-4088-97fa-6dc3fb1e1a52","Type":"ContainerDied","Data":"47cb92293894b144fb1235b48e3276d87fdd7829c486d8d12d6658a335ec3215"} Dec 05 01:31:59 crc kubenswrapper[4990]: I1205 01:31:59.222311 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" podStartSLOduration=4.22229736 podStartE2EDuration="4.22229736s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:31:59.217752511 +0000 UTC m=+1417.593967872" watchObservedRunningTime="2025-12-05 01:31:59.22229736 +0000 UTC m=+1417.598512721" Dec 05 01:31:59 crc kubenswrapper[4990]: I1205 01:31:59.958098 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d454c127-1f92-438f-a5a7-475694c13887" path="/var/lib/kubelet/pods/d454c127-1f92-438f-a5a7-475694c13887/volumes" Dec 05 01:32:00 crc kubenswrapper[4990]: I1205 01:32:00.216264 4990 generic.go:334] "Generic (PLEG): container finished" podID="defcff17-042a-43c3-a963-3f3667abd371" containerID="276d846fcb3add4feecfd49dda876e7767dd3082e126a6095d86ceb7aaba3127" exitCode=0 Dec 05 01:32:00 crc kubenswrapper[4990]: I1205 01:32:00.217411 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7rb6" event={"ID":"defcff17-042a-43c3-a963-3f3667abd371","Type":"ContainerDied","Data":"276d846fcb3add4feecfd49dda876e7767dd3082e126a6095d86ceb7aaba3127"} Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.156154 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-56hk7" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.272925 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-56hk7" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.273271 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-56hk7" event={"ID":"bfef6189-60ca-4088-97fa-6dc3fb1e1a52","Type":"ContainerDied","Data":"8ebf7667fb35418e886e7e9080863104534da6753877922f2aae0cc9ffbd5616"} Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.273294 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebf7667fb35418e886e7e9080863104534da6753877922f2aae0cc9ffbd5616" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.312024 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-combined-ca-bundle\") pod \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.312086 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-config-data\") pod \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.312203 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-db-sync-config-data\") pod \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.312235 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vfwz\" (UniqueName: \"kubernetes.io/projected/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-kube-api-access-9vfwz\") pod \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\" (UID: \"bfef6189-60ca-4088-97fa-6dc3fb1e1a52\") " Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.323641 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bfef6189-60ca-4088-97fa-6dc3fb1e1a52" (UID: "bfef6189-60ca-4088-97fa-6dc3fb1e1a52"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.324731 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-kube-api-access-9vfwz" (OuterVolumeSpecName: "kube-api-access-9vfwz") pod "bfef6189-60ca-4088-97fa-6dc3fb1e1a52" (UID: "bfef6189-60ca-4088-97fa-6dc3fb1e1a52"). InnerVolumeSpecName "kube-api-access-9vfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.386789 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfef6189-60ca-4088-97fa-6dc3fb1e1a52" (UID: "bfef6189-60ca-4088-97fa-6dc3fb1e1a52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.419822 4990 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.419856 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vfwz\" (UniqueName: \"kubernetes.io/projected/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-kube-api-access-9vfwz\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.419868 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.470178 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-config-data" (OuterVolumeSpecName: "config-data") pod "bfef6189-60ca-4088-97fa-6dc3fb1e1a52" (UID: "bfef6189-60ca-4088-97fa-6dc3fb1e1a52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.522305 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfef6189-60ca-4088-97fa-6dc3fb1e1a52-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.747146 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-fd2h7"] Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.747346 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="dnsmasq-dns" containerID="cri-o://cde0fce916af11e2e2641db3965a2cf8c96e1538fa0f928c93dfc5e42ebc5aae" gracePeriod=10 Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.780502 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-tjw7g"] Dec 05 01:32:01 crc kubenswrapper[4990]: E1205 01:32:01.780875 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" containerName="glance-db-sync" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.780893 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" containerName="glance-db-sync" Dec 05 01:32:01 crc kubenswrapper[4990]: E1205 01:32:01.780921 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d454c127-1f92-438f-a5a7-475694c13887" containerName="init" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.780929 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d454c127-1f92-438f-a5a7-475694c13887" containerName="init" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.781082 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" containerName="glance-db-sync" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.781115 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d454c127-1f92-438f-a5a7-475694c13887" containerName="init" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.781949 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.799428 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-tjw7g"] Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.937691 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkcv8\" (UniqueName: \"kubernetes.io/projected/c2582b55-f142-43da-9aac-24ccc08026c2-kube-api-access-mkcv8\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.937744 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.939604 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.939764 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.939816 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:01 crc kubenswrapper[4990]: I1205 01:32:01.939891 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-config\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.041517 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.041593 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.041630 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.041663 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-config\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.041701 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkcv8\" (UniqueName: \"kubernetes.io/projected/c2582b55-f142-43da-9aac-24ccc08026c2-kube-api-access-mkcv8\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.041728 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.042662 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.042688 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.043236 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-config\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.043278 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.043475 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.059889 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkcv8\" (UniqueName: 
\"kubernetes.io/projected/c2582b55-f142-43da-9aac-24ccc08026c2-kube-api-access-mkcv8\") pod \"dnsmasq-dns-785d8bcb8c-tjw7g\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.165957 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.281266 4990 generic.go:334] "Generic (PLEG): container finished" podID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerID="cde0fce916af11e2e2641db3965a2cf8c96e1538fa0f928c93dfc5e42ebc5aae" exitCode=0 Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.281306 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" event={"ID":"3a0238f4-33e8-4123-9395-a266d5dce8e2","Type":"ContainerDied","Data":"cde0fce916af11e2e2641db3965a2cf8c96e1538fa0f928c93dfc5e42ebc5aae"} Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.639756 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.641130 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.645093 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sdvf8" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.652322 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.656563 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.688193 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.758450 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.758549 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.758756 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.758886 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-logs\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " 
pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.759019 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.759057 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.759209 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnh4f\" (UniqueName: \"kubernetes.io/projected/79abb373-14d5-492a-adf7-fea6ebe9f7ff-kube-api-access-dnh4f\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.818149 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.820202 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.822434 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.838454 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860701 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860754 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860777 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860809 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-logs\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860847 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860862 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.860903 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnh4f\" (UniqueName: \"kubernetes.io/projected/79abb373-14d5-492a-adf7-fea6ebe9f7ff-kube-api-access-dnh4f\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.861182 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.861739 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-logs\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.862320 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.865866 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.866203 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.866432 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.880708 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnh4f\" (UniqueName: 
\"kubernetes.io/projected/79abb373-14d5-492a-adf7-fea6ebe9f7ff-kube-api-access-dnh4f\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.895291 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.963260 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.963405 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.963446 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.963550 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.963932 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.964437 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb9vx\" (UniqueName: \"kubernetes.io/projected/746b8387-f9ba-45f0-9650-ce582383e79e-kube-api-access-cb9vx\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.964534 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-logs\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:02 crc kubenswrapper[4990]: I1205 01:32:02.975276 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067448 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb9vx\" (UniqueName: \"kubernetes.io/projected/746b8387-f9ba-45f0-9650-ce582383e79e-kube-api-access-cb9vx\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067556 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-logs\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067617 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067705 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067732 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067772 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.067807 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.072141 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.072515 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-logs\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc 
kubenswrapper[4990]: I1205 01:32:03.072612 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.074466 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.079991 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.080437 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.102910 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb9vx\" (UniqueName: \"kubernetes.io/projected/746b8387-f9ba-45f0-9650-ce582383e79e-kube-api-access-cb9vx\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.107508 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:03 crc kubenswrapper[4990]: I1205 01:32:03.137138 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:06 crc kubenswrapper[4990]: I1205 01:32:06.227310 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:06 crc kubenswrapper[4990]: I1205 01:32:06.338036 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:06 crc kubenswrapper[4990]: I1205 01:32:06.418591 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.352301 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7rb6" event={"ID":"defcff17-042a-43c3-a963-3f3667abd371","Type":"ContainerDied","Data":"0819b3f605de3a632e2a131db8f6f1552c8353f212ec43fb28ed8507d16e45f1"} Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.352611 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0819b3f605de3a632e2a131db8f6f1552c8353f212ec43fb28ed8507d16e45f1" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.403147 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.551819 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-fernet-keys\") pod \"defcff17-042a-43c3-a963-3f3667abd371\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.551863 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-credential-keys\") pod \"defcff17-042a-43c3-a963-3f3667abd371\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.552010 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-config-data\") pod \"defcff17-042a-43c3-a963-3f3667abd371\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.552038 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-scripts\") pod \"defcff17-042a-43c3-a963-3f3667abd371\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.552137 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h72g\" (UniqueName: \"kubernetes.io/projected/defcff17-042a-43c3-a963-3f3667abd371-kube-api-access-5h72g\") pod \"defcff17-042a-43c3-a963-3f3667abd371\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.552180 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-combined-ca-bundle\") pod \"defcff17-042a-43c3-a963-3f3667abd371\" (UID: \"defcff17-042a-43c3-a963-3f3667abd371\") " Dec 05 
01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.566035 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-scripts" (OuterVolumeSpecName: "scripts") pod "defcff17-042a-43c3-a963-3f3667abd371" (UID: "defcff17-042a-43c3-a963-3f3667abd371"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.566069 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "defcff17-042a-43c3-a963-3f3667abd371" (UID: "defcff17-042a-43c3-a963-3f3667abd371"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.566140 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "defcff17-042a-43c3-a963-3f3667abd371" (UID: "defcff17-042a-43c3-a963-3f3667abd371"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.566153 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defcff17-042a-43c3-a963-3f3667abd371-kube-api-access-5h72g" (OuterVolumeSpecName: "kube-api-access-5h72g") pod "defcff17-042a-43c3-a963-3f3667abd371" (UID: "defcff17-042a-43c3-a963-3f3667abd371"). InnerVolumeSpecName "kube-api-access-5h72g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.588037 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-config-data" (OuterVolumeSpecName: "config-data") pod "defcff17-042a-43c3-a963-3f3667abd371" (UID: "defcff17-042a-43c3-a963-3f3667abd371"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.596253 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defcff17-042a-43c3-a963-3f3667abd371" (UID: "defcff17-042a-43c3-a963-3f3667abd371"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.655722 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h72g\" (UniqueName: \"kubernetes.io/projected/defcff17-042a-43c3-a963-3f3667abd371-kube-api-access-5h72g\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.655780 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.655801 4990 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.655819 4990 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.655837 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:07 crc kubenswrapper[4990]: I1205 01:32:07.655853 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defcff17-042a-43c3-a963-3f3667abd371-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.358674 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r7rb6" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.489193 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r7rb6"] Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.496994 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r7rb6"] Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.593521 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hr68g"] Dec 05 01:32:08 crc kubenswrapper[4990]: E1205 01:32:08.594074 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defcff17-042a-43c3-a963-3f3667abd371" containerName="keystone-bootstrap" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.594097 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="defcff17-042a-43c3-a963-3f3667abd371" containerName="keystone-bootstrap" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.594408 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="defcff17-042a-43c3-a963-3f3667abd371" containerName="keystone-bootstrap" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.595355 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.598877 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.599395 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2q9vq" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.599609 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.599826 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.600361 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.601317 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hr68g"] Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.775776 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-fernet-keys\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.775899 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-combined-ca-bundle\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.776143 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-scripts\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.776213 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-config-data\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.776275 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmknx\" (UniqueName: \"kubernetes.io/projected/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-kube-api-access-kmknx\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.776331 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-credential-keys\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.879112 4990 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-scripts\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.879178 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmknx\" (UniqueName: \"kubernetes.io/projected/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-kube-api-access-kmknx\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.879201 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-config-data\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.879224 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-credential-keys\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.879314 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-fernet-keys\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.879406 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-combined-ca-bundle\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.885329 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-config-data\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.886048 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-combined-ca-bundle\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.886547 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-credential-keys\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.888398 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-scripts\") pod \"keystone-bootstrap-hr68g\" (UID: 
\"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.889597 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-fernet-keys\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.896971 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmknx\" (UniqueName: \"kubernetes.io/projected/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-kube-api-access-kmknx\") pod \"keystone-bootstrap-hr68g\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") " pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:08 crc kubenswrapper[4990]: I1205 01:32:08.933330 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hr68g" Dec 05 01:32:09 crc kubenswrapper[4990]: I1205 01:32:09.952509 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defcff17-042a-43c3-a963-3f3667abd371" path="/var/lib/kubelet/pods/defcff17-042a-43c3-a963-3f3667abd371/volumes" Dec 05 01:32:15 crc kubenswrapper[4990]: E1205 01:32:15.218278 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 05 01:32:15 crc kubenswrapper[4990]: E1205 01:32:15.218867 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hlzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kzf4n_openstack(d5be3dfc-61e4-495c-8b0b-22f417664a9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 01:32:15 crc kubenswrapper[4990]: E1205 
01:32:15.220094 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kzf4n" podUID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" Dec 05 01:32:15 crc kubenswrapper[4990]: E1205 01:32:15.423831 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-kzf4n" podUID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" Dec 05 01:32:16 crc kubenswrapper[4990]: E1205 01:32:16.231711 4990 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 05 01:32:16 crc kubenswrapper[4990]: E1205 01:32:16.232137 4990 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddlqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-db-sync-2qddb_openstack(1cf00e7d-d396-4719-b077-bd14781d8836): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 01:32:16 crc kubenswrapper[4990]: E1205 01:32:16.233475 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2qddb" podUID="1cf00e7d-d396-4719-b077-bd14781d8836" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.316352 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.417904 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.432804 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.432799 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-fd2h7" event={"ID":"3a0238f4-33e8-4123-9395-a266d5dce8e2","Type":"ContainerDied","Data":"b1a9a32842e9617d1c4feb4d87d1dd801ab576404e801c1056256f580dcc0a9d"} Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.433090 4990 scope.go:117] "RemoveContainer" containerID="cde0fce916af11e2e2641db3965a2cf8c96e1538fa0f928c93dfc5e42ebc5aae" Dec 05 01:32:16 crc kubenswrapper[4990]: E1205 01:32:16.434131 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2qddb" podUID="1cf00e7d-d396-4719-b077-bd14781d8836" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.436861 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm4z9\" (UniqueName: \"kubernetes.io/projected/3a0238f4-33e8-4123-9395-a266d5dce8e2-kube-api-access-jm4z9\") pod \"3a0238f4-33e8-4123-9395-a266d5dce8e2\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.437087 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-svc\") pod \"3a0238f4-33e8-4123-9395-a266d5dce8e2\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.437295 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-config\") pod \"3a0238f4-33e8-4123-9395-a266d5dce8e2\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.437567 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-sb\") pod \"3a0238f4-33e8-4123-9395-a266d5dce8e2\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 
01:32:16.437716 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-swift-storage-0\") pod \"3a0238f4-33e8-4123-9395-a266d5dce8e2\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.437828 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-nb\") pod \"3a0238f4-33e8-4123-9395-a266d5dce8e2\" (UID: \"3a0238f4-33e8-4123-9395-a266d5dce8e2\") " Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.441908 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0238f4-33e8-4123-9395-a266d5dce8e2-kube-api-access-jm4z9" (OuterVolumeSpecName: "kube-api-access-jm4z9") pod "3a0238f4-33e8-4123-9395-a266d5dce8e2" (UID: "3a0238f4-33e8-4123-9395-a266d5dce8e2"). InnerVolumeSpecName "kube-api-access-jm4z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.510190 4990 scope.go:117] "RemoveContainer" containerID="5ac8b1216ca4f2e9c8a17a9c804a21e79244aef9049362cc3f6228bdca4779d9" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.510678 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a0238f4-33e8-4123-9395-a266d5dce8e2" (UID: "3a0238f4-33e8-4123-9395-a266d5dce8e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.537463 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a0238f4-33e8-4123-9395-a266d5dce8e2" (UID: "3a0238f4-33e8-4123-9395-a266d5dce8e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.541251 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm4z9\" (UniqueName: \"kubernetes.io/projected/3a0238f4-33e8-4123-9395-a266d5dce8e2-kube-api-access-jm4z9\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.541293 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.541304 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.542344 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a0238f4-33e8-4123-9395-a266d5dce8e2" (UID: "3a0238f4-33e8-4123-9395-a266d5dce8e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.544418 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a0238f4-33e8-4123-9395-a266d5dce8e2" (UID: "3a0238f4-33e8-4123-9395-a266d5dce8e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.552009 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-config" (OuterVolumeSpecName: "config") pod "3a0238f4-33e8-4123-9395-a266d5dce8e2" (UID: "3a0238f4-33e8-4123-9395-a266d5dce8e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.711575 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.712259 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.712272 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0238f4-33e8-4123-9395-a266d5dce8e2-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.772236 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-tjw7g"] Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.793722 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-fd2h7"] Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.802072 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-fd2h7"] Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.872021 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:16 crc kubenswrapper[4990]: I1205 01:32:16.885211 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hr68g"] Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.442402 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr68g" event={"ID":"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14","Type":"ContainerStarted","Data":"3c9dc794e63045f795ad588361662d0beeff2af8e7ed2a579125f1e2b198d8d5"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.442453 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr68g" event={"ID":"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14","Type":"ContainerStarted","Data":"2559b1e803e1faf3d83019464ced1d4c0dc44dc837a9baeffc4e35e0723d1c99"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.451495 4990 generic.go:334] "Generic (PLEG): container finished" podID="c2582b55-f142-43da-9aac-24ccc08026c2" containerID="22f8a3eeee8c338694123a2520b6f8852d4c8a9aed7d7e245132cab8ed1c47aa" exitCode=0 Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.451542 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" 
event={"ID":"c2582b55-f142-43da-9aac-24ccc08026c2","Type":"ContainerDied","Data":"22f8a3eeee8c338694123a2520b6f8852d4c8a9aed7d7e245132cab8ed1c47aa"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.451843 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" event={"ID":"c2582b55-f142-43da-9aac-24ccc08026c2","Type":"ContainerStarted","Data":"441b01e2b163f7ae310d26bc742aadf6e7de0ff3d6b6080d096380acf53b76cb"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.469433 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n9mqs" event={"ID":"da5e4277-78a0-4eca-b9f6-67fc6c925ed1","Type":"ContainerStarted","Data":"03f57bdc635d64e1856258014678128b8246e48a437406c4f7954d39ab078ccb"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.470252 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hr68g" podStartSLOduration=9.470241184 podStartE2EDuration="9.470241184s" podCreationTimestamp="2025-12-05 01:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:17.459698134 +0000 UTC m=+1435.835913505" watchObservedRunningTime="2025-12-05 01:32:17.470241184 +0000 UTC m=+1435.846456545" Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.497370 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerStarted","Data":"6e85fe47a7a6f9ef3e39d75c22d4f8b3891f3587a46157b30aa2c670468bd3fa"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.504651 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n9mqs" podStartSLOduration=4.307065346 podStartE2EDuration="22.50463761s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" firstStartedPulling="2025-12-05 01:31:57.013835767 +0000 UTC m=+1415.390051128" lastFinishedPulling="2025-12-05 01:32:15.211408031 +0000 UTC m=+1433.587623392" observedRunningTime="2025-12-05 01:32:17.504095364 +0000 UTC m=+1435.880310725" watchObservedRunningTime="2025-12-05 01:32:17.50463761 +0000 UTC m=+1435.880852971" Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.505309 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79abb373-14d5-492a-adf7-fea6ebe9f7ff","Type":"ContainerStarted","Data":"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.505391 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79abb373-14d5-492a-adf7-fea6ebe9f7ff","Type":"ContainerStarted","Data":"739f61f865d137e7fb99db2167c40d68613a926deb77a65cdad4225a7094cfaa"} Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.942474 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" path="/var/lib/kubelet/pods/3a0238f4-33e8-4123-9395-a266d5dce8e2/volumes" Dec 05 01:32:17 crc kubenswrapper[4990]: I1205 01:32:17.943193 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.530003 4990 generic.go:334] "Generic (PLEG): container finished" podID="04c2a959-3818-40b2-8182-7fa3287ee0df" containerID="93c1c5b43e3e314a2c0392ec6bff92ab1b0d03d6e4c4d5249b0654ad3fee74c1" 
exitCode=0 Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.530041 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fr28q" event={"ID":"04c2a959-3818-40b2-8182-7fa3287ee0df","Type":"ContainerDied","Data":"93c1c5b43e3e314a2c0392ec6bff92ab1b0d03d6e4c4d5249b0654ad3fee74c1"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.533045 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"746b8387-f9ba-45f0-9650-ce582383e79e","Type":"ContainerStarted","Data":"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.533082 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"746b8387-f9ba-45f0-9650-ce582383e79e","Type":"ContainerStarted","Data":"15308ddcc5d253ba8ec0edaea20877c5dcc3373ddd6017d61b7fb2750709c3b9"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.535237 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" event={"ID":"c2582b55-f142-43da-9aac-24ccc08026c2","Type":"ContainerStarted","Data":"eb7853ec4615bd3b4f8b1cfe4db92abd0558f4507366bce8aa9b848b7dcc73aa"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.535366 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.536725 4990 generic.go:334] "Generic (PLEG): container finished" podID="da5e4277-78a0-4eca-b9f6-67fc6c925ed1" containerID="03f57bdc635d64e1856258014678128b8246e48a437406c4f7954d39ab078ccb" exitCode=0 Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.536778 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n9mqs" event={"ID":"da5e4277-78a0-4eca-b9f6-67fc6c925ed1","Type":"ContainerDied","Data":"03f57bdc635d64e1856258014678128b8246e48a437406c4f7954d39ab078ccb"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.539616 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerStarted","Data":"49e3c4b998337a905dec9be39a02d602a7fbee657480dbb9698096fb943acc37"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.543057 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-log" containerID="cri-o://5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461" gracePeriod=30 Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.543352 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79abb373-14d5-492a-adf7-fea6ebe9f7ff","Type":"ContainerStarted","Data":"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee"} Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.543427 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-httpd" containerID="cri-o://331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee" gracePeriod=30 Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.579060 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.579042765 
podStartE2EDuration="17.579042765s" podCreationTimestamp="2025-12-05 01:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:18.572774057 +0000 UTC m=+1436.948989418" watchObservedRunningTime="2025-12-05 01:32:18.579042765 +0000 UTC m=+1436.955258126" Dec 05 01:32:18 crc kubenswrapper[4990]: I1205 01:32:18.614379 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" podStartSLOduration=17.614360517 podStartE2EDuration="17.614360517s" podCreationTimestamp="2025-12-05 01:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:18.61127706 +0000 UTC m=+1436.987492421" watchObservedRunningTime="2025-12-05 01:32:18.614360517 +0000 UTC m=+1436.990575888" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.085130 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.266671 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-httpd-run\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.266764 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.266897 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-scripts\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.266963 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnh4f\" (UniqueName: \"kubernetes.io/projected/79abb373-14d5-492a-adf7-fea6ebe9f7ff-kube-api-access-dnh4f\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.266999 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-combined-ca-bundle\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.267040 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-logs\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.267054 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-config-data\") pod \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\" (UID: \"79abb373-14d5-492a-adf7-fea6ebe9f7ff\") " Dec 05 01:32:19 crc 
kubenswrapper[4990]: I1205 01:32:19.268303 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.270001 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-logs" (OuterVolumeSpecName: "logs") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.272882 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-scripts" (OuterVolumeSpecName: "scripts") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.273640 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79abb373-14d5-492a-adf7-fea6ebe9f7ff-kube-api-access-dnh4f" (OuterVolumeSpecName: "kube-api-access-dnh4f") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "kube-api-access-dnh4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.274927 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.297613 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.314261 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-config-data" (OuterVolumeSpecName: "config-data") pod "79abb373-14d5-492a-adf7-fea6ebe9f7ff" (UID: "79abb373-14d5-492a-adf7-fea6ebe9f7ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368607 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368648 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnh4f\" (UniqueName: \"kubernetes.io/projected/79abb373-14d5-492a-adf7-fea6ebe9f7ff-kube-api-access-dnh4f\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368662 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368675 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368694 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79abb373-14d5-492a-adf7-fea6ebe9f7ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368702 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79abb373-14d5-492a-adf7-fea6ebe9f7ff-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.368731 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.389416 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.469752 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.554862 4990 generic.go:334] "Generic (PLEG): container finished" podID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerID="331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee" exitCode=0 Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.555113 4990 generic.go:334] "Generic (PLEG): container finished" podID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerID="5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461" exitCode=143 Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.554953 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.554945 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79abb373-14d5-492a-adf7-fea6ebe9f7ff","Type":"ContainerDied","Data":"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee"} Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.555216 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79abb373-14d5-492a-adf7-fea6ebe9f7ff","Type":"ContainerDied","Data":"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461"} Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.555227 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79abb373-14d5-492a-adf7-fea6ebe9f7ff","Type":"ContainerDied","Data":"739f61f865d137e7fb99db2167c40d68613a926deb77a65cdad4225a7094cfaa"} Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.555244 4990 scope.go:117] "RemoveContainer" containerID="331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.560690 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"746b8387-f9ba-45f0-9650-ce582383e79e","Type":"ContainerStarted","Data":"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c"} Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.560737 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-log" containerID="cri-o://6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280" gracePeriod=30 Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.560824 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-httpd" containerID="cri-o://b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c" gracePeriod=30 Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.588002 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.587982924 podStartE2EDuration="18.587982924s" podCreationTimestamp="2025-12-05 01:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:19.581388326 +0000 UTC m=+1437.957603687" watchObservedRunningTime="2025-12-05 01:32:19.587982924 +0000 UTC m=+1437.964198275" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.592043 4990 scope.go:117] "RemoveContainer" containerID="5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.612763 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.625709 4990 scope.go:117] "RemoveContainer" containerID="331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee" Dec 05 01:32:19 crc kubenswrapper[4990]: E1205 01:32:19.626470 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee\": container with ID starting with 331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee not found: ID does not exist" containerID="331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.626530 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee"} err="failed to get container status \"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee\": rpc error: code = NotFound desc = could not find container \"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee\": container with ID starting with 331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee not found: ID does not exist" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.626557 4990 scope.go:117] "RemoveContainer" containerID="5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461" Dec 05 01:32:19 crc kubenswrapper[4990]: E1205 01:32:19.627671 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461\": container with ID starting with 5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461 not found: ID does not exist" containerID="5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.627717 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461"} err="failed to get container status \"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461\": rpc error: code = NotFound desc = could not find container \"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461\": container with ID starting with 5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461 not found: ID does not exist" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.627748 4990 scope.go:117] "RemoveContainer" containerID="331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.628386 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee"} err="failed to get container status \"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee\": rpc error: code = NotFound desc = could not find container \"331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee\": container with ID starting with 331fb8650c6eb3380e7fa49dc63fc9752f1982eb72c2ca619df054867ca7ddee not found: ID does not exist" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.628418 4990 scope.go:117] "RemoveContainer" containerID="5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.628882 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461"} err="failed to get container status \"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461\": rpc error: code = NotFound desc = could not find container \"5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461\": container with ID starting with 
5eedbb00c8f4adff0d662a87988d537a0bbf3051321d6233662317c3908b4461 not found: ID does not exist" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.633577 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.647865 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:19 crc kubenswrapper[4990]: E1205 01:32:19.648308 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="init" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648325 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="init" Dec 05 01:32:19 crc kubenswrapper[4990]: E1205 01:32:19.648340 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="dnsmasq-dns" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648348 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="dnsmasq-dns" Dec 05 01:32:19 crc kubenswrapper[4990]: E1205 01:32:19.648357 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-log" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648365 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-log" Dec 05 01:32:19 crc kubenswrapper[4990]: E1205 01:32:19.648397 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-httpd" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648404 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-httpd" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648619 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0238f4-33e8-4123-9395-a266d5dce8e2" containerName="dnsmasq-dns" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648647 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-httpd" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.648663 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" containerName="glance-log" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.649828 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.652351 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.654930 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.658413 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.776886 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-logs\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777004 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777036 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777063 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lvn8\" (UniqueName: \"kubernetes.io/projected/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-kube-api-access-2lvn8\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777114 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777195 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777233 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.777283 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879307 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879356 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879388 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879418 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-logs\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879470 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879501 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879517 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lvn8\" (UniqueName: \"kubernetes.io/projected/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-kube-api-access-2lvn8\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.879541 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.880614 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-logs\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.881537 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.883776 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.891065 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.893339 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.894672 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fr28q" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.903304 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.904099 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.913275 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lvn8\" (UniqueName: \"kubernetes.io/projected/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-kube-api-access-2lvn8\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.931282 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.947841 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79abb373-14d5-492a-adf7-fea6ebe9f7ff" path="/var/lib/kubelet/pods/79abb373-14d5-492a-adf7-fea6ebe9f7ff/volumes" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.978549 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:32:19 crc kubenswrapper[4990]: I1205 01:32:19.989852 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-n9mqs" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081240 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-combined-ca-bundle\") pod \"04c2a959-3818-40b2-8182-7fa3287ee0df\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081304 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-combined-ca-bundle\") pod \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081741 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2z5g\" (UniqueName: \"kubernetes.io/projected/04c2a959-3818-40b2-8182-7fa3287ee0df-kube-api-access-w2z5g\") pod \"04c2a959-3818-40b2-8182-7fa3287ee0df\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081791 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-config-data\") pod \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081811 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-logs\") pod \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081834 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-scripts\") pod \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081857 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-config\") pod \"04c2a959-3818-40b2-8182-7fa3287ee0df\" (UID: \"04c2a959-3818-40b2-8182-7fa3287ee0df\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.081915 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpvl4\" (UniqueName: \"kubernetes.io/projected/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-kube-api-access-mpvl4\") pod \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\" (UID: \"da5e4277-78a0-4eca-b9f6-67fc6c925ed1\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.084145 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-logs" (OuterVolumeSpecName: "logs") pod "da5e4277-78a0-4eca-b9f6-67fc6c925ed1" (UID: "da5e4277-78a0-4eca-b9f6-67fc6c925ed1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.087778 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-kube-api-access-mpvl4" (OuterVolumeSpecName: "kube-api-access-mpvl4") pod "da5e4277-78a0-4eca-b9f6-67fc6c925ed1" (UID: "da5e4277-78a0-4eca-b9f6-67fc6c925ed1"). InnerVolumeSpecName "kube-api-access-mpvl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.095493 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c2a959-3818-40b2-8182-7fa3287ee0df-kube-api-access-w2z5g" (OuterVolumeSpecName: "kube-api-access-w2z5g") pod "04c2a959-3818-40b2-8182-7fa3287ee0df" (UID: "04c2a959-3818-40b2-8182-7fa3287ee0df"). InnerVolumeSpecName "kube-api-access-w2z5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.098001 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-scripts" (OuterVolumeSpecName: "scripts") pod "da5e4277-78a0-4eca-b9f6-67fc6c925ed1" (UID: "da5e4277-78a0-4eca-b9f6-67fc6c925ed1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.130004 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da5e4277-78a0-4eca-b9f6-67fc6c925ed1" (UID: "da5e4277-78a0-4eca-b9f6-67fc6c925ed1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.134916 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-config-data" (OuterVolumeSpecName: "config-data") pod "da5e4277-78a0-4eca-b9f6-67fc6c925ed1" (UID: "da5e4277-78a0-4eca-b9f6-67fc6c925ed1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.135634 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04c2a959-3818-40b2-8182-7fa3287ee0df" (UID: "04c2a959-3818-40b2-8182-7fa3287ee0df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.144903 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-config" (OuterVolumeSpecName: "config") pod "04c2a959-3818-40b2-8182-7fa3287ee0df" (UID: "04c2a959-3818-40b2-8182-7fa3287ee0df"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187272 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187337 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187348 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpvl4\" (UniqueName: \"kubernetes.io/projected/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-kube-api-access-mpvl4\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187361 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c2a959-3818-40b2-8182-7fa3287ee0df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187374 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187384 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2z5g\" (UniqueName: \"kubernetes.io/projected/04c2a959-3818-40b2-8182-7fa3287ee0df-kube-api-access-w2z5g\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187414 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.187428 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da5e4277-78a0-4eca-b9f6-67fc6c925ed1-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.189848 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.288867 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-combined-ca-bundle\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.288989 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.289041 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb9vx\" (UniqueName: \"kubernetes.io/projected/746b8387-f9ba-45f0-9650-ce582383e79e-kube-api-access-cb9vx\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.289097 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-logs\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.289131 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-httpd-run\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.289164 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-scripts\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.289215 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-config-data\") pod \"746b8387-f9ba-45f0-9650-ce582383e79e\" (UID: \"746b8387-f9ba-45f0-9650-ce582383e79e\") " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.290642 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-logs" (OuterVolumeSpecName: "logs") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.290655 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.293434 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-scripts" (OuterVolumeSpecName: "scripts") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.293888 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746b8387-f9ba-45f0-9650-ce582383e79e-kube-api-access-cb9vx" (OuterVolumeSpecName: "kube-api-access-cb9vx") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "kube-api-access-cb9vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.296588 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.311920 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.334564 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-config-data" (OuterVolumeSpecName: "config-data") pod "746b8387-f9ba-45f0-9650-ce582383e79e" (UID: "746b8387-f9ba-45f0-9650-ce582383e79e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.390963 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.390996 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb9vx\" (UniqueName: \"kubernetes.io/projected/746b8387-f9ba-45f0-9650-ce582383e79e-kube-api-access-cb9vx\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.391006 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.391015 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/746b8387-f9ba-45f0-9650-ce582383e79e-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.391024 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.391032 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.391042 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746b8387-f9ba-45f0-9650-ce582383e79e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.407763 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.492123 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.560883 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.570557 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fr28q" event={"ID":"04c2a959-3818-40b2-8182-7fa3287ee0df","Type":"ContainerDied","Data":"750d005d420f178223f660e800357b879d55b21927ed874cb1fcdd7f487e051e"} Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.570594 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750d005d420f178223f660e800357b879d55b21927ed874cb1fcdd7f487e051e" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.570812 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fr28q" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585095 4990 generic.go:334] "Generic (PLEG): container finished" podID="746b8387-f9ba-45f0-9650-ce582383e79e" containerID="b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c" exitCode=0 Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585125 4990 generic.go:334] "Generic (PLEG): container finished" podID="746b8387-f9ba-45f0-9650-ce582383e79e" containerID="6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280" exitCode=143 Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585162 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"746b8387-f9ba-45f0-9650-ce582383e79e","Type":"ContainerDied","Data":"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c"} Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585189 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"746b8387-f9ba-45f0-9650-ce582383e79e","Type":"ContainerDied","Data":"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280"} Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585200 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"746b8387-f9ba-45f0-9650-ce582383e79e","Type":"ContainerDied","Data":"15308ddcc5d253ba8ec0edaea20877c5dcc3373ddd6017d61b7fb2750709c3b9"} Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585215 4990 scope.go:117] "RemoveContainer" containerID="b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.585294 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.590971 4990 generic.go:334] "Generic (PLEG): container finished" podID="62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" containerID="3c9dc794e63045f795ad588361662d0beeff2af8e7ed2a579125f1e2b198d8d5" exitCode=0 Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.591031 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr68g" event={"ID":"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14","Type":"ContainerDied","Data":"3c9dc794e63045f795ad588361662d0beeff2af8e7ed2a579125f1e2b198d8d5"} Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.598989 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n9mqs" event={"ID":"da5e4277-78a0-4eca-b9f6-67fc6c925ed1","Type":"ContainerDied","Data":"b5b996ae5aeed152b3b20ee1b7a7c4f625bcdf6c864253259b6e24a0b54ea650"} Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.599019 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5b996ae5aeed152b3b20ee1b7a7c4f625bcdf6c864253259b6e24a0b54ea650" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.599060 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-n9mqs" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619019 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c5f858c6d-zxwsh"] Dec 05 01:32:20 crc kubenswrapper[4990]: E1205 01:32:20.619432 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-httpd" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619453 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-httpd" Dec 05 01:32:20 crc kubenswrapper[4990]: E1205 01:32:20.619540 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c2a959-3818-40b2-8182-7fa3287ee0df" containerName="neutron-db-sync" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619550 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c2a959-3818-40b2-8182-7fa3287ee0df" containerName="neutron-db-sync" Dec 05 01:32:20 crc kubenswrapper[4990]: E1205 01:32:20.619574 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-log" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619583 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-log" Dec 05 01:32:20 crc kubenswrapper[4990]: E1205 01:32:20.619595 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4277-78a0-4eca-b9f6-67fc6c925ed1" containerName="placement-db-sync" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619603 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4277-78a0-4eca-b9f6-67fc6c925ed1" containerName="placement-db-sync" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619817 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c2a959-3818-40b2-8182-7fa3287ee0df" containerName="neutron-db-sync" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619846 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-httpd" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619874 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" containerName="glance-log" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.619887 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4277-78a0-4eca-b9f6-67fc6c925ed1" containerName="placement-db-sync" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.621050 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.626824 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.627017 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.626830 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.627228 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rq8p4" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.627258 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.642069 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.701250 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c5f858c6d-zxwsh"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.724510 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.741266 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.742850 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.744956 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.745211 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.786052 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800436 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-internal-tls-certs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800491 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-scripts\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800522 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-combined-ca-bundle\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800589 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-config-data\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800642 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4rr\" (UniqueName: \"kubernetes.io/projected/82eb03c9-869c-447d-9b78-b4ef916b59ac-kube-api-access-2f4rr\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800684 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82eb03c9-869c-447d-9b78-b4ef916b59ac-logs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.800718 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-public-tls-certs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.833974 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-tjw7g"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.834244 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="dnsmasq-dns" containerID="cri-o://eb7853ec4615bd3b4f8b1cfe4db92abd0558f4507366bce8aa9b848b7dcc73aa" gracePeriod=10 Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.882084 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7rhp4"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.884144 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.898586 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64869d6796-xppnk"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.900210 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903197 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903234 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-config-data\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903276 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903307 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4rr\" (UniqueName: \"kubernetes.io/projected/82eb03c9-869c-447d-9b78-b4ef916b59ac-kube-api-access-2f4rr\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903323 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903348 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903379 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903398 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82eb03c9-869c-447d-9b78-b4ef916b59ac-logs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903415 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbzzv\" (UniqueName: \"kubernetes.io/projected/f7b19a4b-c55b-4845-ae65-09442bb0a29f-kube-api-access-bbzzv\") pod \"glance-default-internal-api-0\" (UID: 
\"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903544 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903585 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-public-tls-certs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-scripts\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.903726 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-internal-tls-certs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905444 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905644 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zpj4v" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905671 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82eb03c9-869c-447d-9b78-b4ef916b59ac-logs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905756 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905797 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905932 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.905963 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-combined-ca-bundle\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.911825 4990 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-scripts\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.911842 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-public-tls-certs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.913280 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-config-data\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.914879 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-internal-tls-certs\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.917090 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7rhp4"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.917977 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-combined-ca-bundle\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.926698 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64869d6796-xppnk"] Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.927499 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4rr\" (UniqueName: \"kubernetes.io/projected/82eb03c9-869c-447d-9b78-b4ef916b59ac-kube-api-access-2f4rr\") pod \"placement-6c5f858c6d-zxwsh\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:20 crc kubenswrapper[4990]: I1205 01:32:20.943262 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.007183 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.007230 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-config\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.007287 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.009867 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.009920 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckpw\" (UniqueName: \"kubernetes.io/projected/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-kube-api-access-gckpw\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.009955 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.009976 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-httpd-config\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010038 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010077 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkbdp\" (UniqueName: \"kubernetes.io/projected/e9631ba6-27a2-4f46-a578-e3f4998aca10-kube-api-access-pkbdp\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" 
(UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010099 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-config\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010119 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010140 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-ovndb-tls-certs\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010161 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010186 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-combined-ca-bundle\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010221 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010241 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010261 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbzzv\" (UniqueName: \"kubernetes.io/projected/f7b19a4b-c55b-4845-ae65-09442bb0a29f-kube-api-access-bbzzv\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010289 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.010304 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.011968 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.012535 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.013868 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.013966 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.019819 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.020523 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.026322 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.044069 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbzzv\" (UniqueName: \"kubernetes.io/projected/f7b19a4b-c55b-4845-ae65-09442bb0a29f-kube-api-access-bbzzv\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:32:21 
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.059573 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " pod="openstack/glance-default-internal-api-0"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.094913 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.111982 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-combined-ca-bundle\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112054 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112107 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112145 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112170 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-config\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112189 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112217 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckpw\" (UniqueName: \"kubernetes.io/projected/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-kube-api-access-gckpw\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
" pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112291 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkbdp\" (UniqueName: \"kubernetes.io/projected/e9631ba6-27a2-4f46-a578-e3f4998aca10-kube-api-access-pkbdp\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112312 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-config\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.112334 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-ovndb-tls-certs\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.113008 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.113471 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.113862 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-config\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.114113 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.115438 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.118225 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-config\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.121534 4990 operation_generator.go:637] "MountVolume.SetUp 
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.121534 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-combined-ca-bundle\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.121688 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-ovndb-tls-certs\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.123493 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-httpd-config\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.128523 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckpw\" (UniqueName: \"kubernetes.io/projected/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-kube-api-access-gckpw\") pod \"neutron-64869d6796-xppnk\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.130417 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkbdp\" (UniqueName: \"kubernetes.io/projected/e9631ba6-27a2-4f46-a578-e3f4998aca10-kube-api-access-pkbdp\") pod \"dnsmasq-dns-55f844cf75-7rhp4\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " pod="openstack/dnsmasq-dns-55f844cf75-7rhp4"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.300062 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.313016 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64869d6796-xppnk"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.610259 4990 generic.go:334] "Generic (PLEG): container finished" podID="c2582b55-f142-43da-9aac-24ccc08026c2" containerID="eb7853ec4615bd3b4f8b1cfe4db92abd0558f4507366bce8aa9b848b7dcc73aa" exitCode=0
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.610422 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" event={"ID":"c2582b55-f142-43da-9aac-24ccc08026c2","Type":"ContainerDied","Data":"eb7853ec4615bd3b4f8b1cfe4db92abd0558f4507366bce8aa9b848b7dcc73aa"}
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.823824 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.823867 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 01:32:21 crc kubenswrapper[4990]: I1205 01:32:21.942535 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746b8387-f9ba-45f0-9650-ce582383e79e" path="/var/lib/kubelet/pods/746b8387-f9ba-45f0-9650-ce582383e79e/volumes"
Dec 05 01:32:22 crc kubenswrapper[4990]: I1205 01:32:22.166589 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.126326 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7fdcd7bc79-skn69"]
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.127818 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.129863 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.130117 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.145734 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fdcd7bc79-skn69"]
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251080 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr24p\" (UniqueName: \"kubernetes.io/projected/60d8e2e9-244e-48b4-b99f-2606dc492482-kube-api-access-dr24p\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251122 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-httpd-config\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251146 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-public-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251179 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-config\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251583 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-internal-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251693 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-combined-ca-bundle\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.251819 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-ovndb-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69"
\"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-config\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.352885 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-internal-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.352919 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-combined-ca-bundle\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.352950 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-ovndb-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.353020 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr24p\" (UniqueName: \"kubernetes.io/projected/60d8e2e9-244e-48b4-b99f-2606dc492482-kube-api-access-dr24p\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.353040 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-httpd-config\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.353063 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-public-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.358696 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-internal-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.359078 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-httpd-config\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.359770 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-combined-ca-bundle\") pod \"neutron-7fdcd7bc79-skn69\" (UID: 
\"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.360293 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-public-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.363513 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-ovndb-tls-certs\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.364187 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-config\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.370053 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr24p\" (UniqueName: \"kubernetes.io/projected/60d8e2e9-244e-48b4-b99f-2606dc492482-kube-api-access-dr24p\") pod \"neutron-7fdcd7bc79-skn69\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:23 crc kubenswrapper[4990]: I1205 01:32:23.443237 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:24 crc kubenswrapper[4990]: W1205 01:32:24.359585 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6fb5b68_4f83_45ef_986e_a527b3ebca9e.slice/crio-2cdfcccbf806e8c8474a732a23c94d7a00225812631ac17f40244e3366b7efa1 WatchSource:0}: Error finding container 2cdfcccbf806e8c8474a732a23c94d7a00225812631ac17f40244e3366b7efa1: Status 404 returned error can't find the container with id 2cdfcccbf806e8c8474a732a23c94d7a00225812631ac17f40244e3366b7efa1 Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.431832 4990 scope.go:117] "RemoveContainer" containerID="6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.491373 4990 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.491373 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hr68g"
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.574605 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-config-data\") pod \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.574690 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-credential-keys\") pod \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.574742 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-scripts\") pod \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.574779 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-combined-ca-bundle\") pod \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.574879 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmknx\" (UniqueName: \"kubernetes.io/projected/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-kube-api-access-kmknx\") pod \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.574905 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-fernet-keys\") pod \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\" (UID: \"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.579355 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" (UID: "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.580745 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-scripts" (OuterVolumeSpecName: "scripts") pod "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" (UID: "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.586436 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-kube-api-access-kmknx" (OuterVolumeSpecName: "kube-api-access-kmknx") pod "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" (UID: "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14"). InnerVolumeSpecName "kube-api-access-kmknx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.630746 4990 scope.go:117] "RemoveContainer" containerID="b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.630936 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" (UID: "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.632139 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-config-data" (OuterVolumeSpecName: "config-data") pod "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" (UID: "62a8c0e5-85b3-46e5-8e1f-3939a1eafc14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: E1205 01:32:24.633813 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c\": container with ID starting with b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c not found: ID does not exist" containerID="b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.633842 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c"} err="failed to get container status \"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c\": rpc error: code = NotFound desc = could not find container \"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c\": container with ID starting with b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c not found: ID does not exist" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.633865 4990 scope.go:117] "RemoveContainer" containerID="6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280" Dec 05 01:32:24 crc kubenswrapper[4990]: E1205 01:32:24.640076 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280\": container with ID starting with 6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280 not found: ID does not exist" containerID="6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.640277 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280"} err="failed to get container status 
\"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280\": rpc error: code = NotFound desc = could not find container \"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280\": container with ID starting with 6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280 not found: ID does not exist" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.640305 4990 scope.go:117] "RemoveContainer" containerID="b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.640701 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c"} err="failed to get container status \"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c\": rpc error: code = NotFound desc = could not find container \"b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c\": container with ID starting with b0a5752a25bbddf0ee01f4c6317ef985a2f3b0c5f29bce24226eac731e3da76c not found: ID does not exist" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.640720 4990 scope.go:117] "RemoveContainer" containerID="6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.641001 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280"} err="failed to get container status \"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280\": rpc error: code = NotFound desc = could not find container \"6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280\": container with ID starting with 6f2665145a2e63cf698f06b1cece8832fc0ff1818788c4d096ebf239b3090280 not found: ID does not exist" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.647515 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6fb5b68-4f83-45ef-986e-a527b3ebca9e","Type":"ContainerStarted","Data":"2cdfcccbf806e8c8474a732a23c94d7a00225812631ac17f40244e3366b7efa1"} Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.656126 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hr68g" event={"ID":"62a8c0e5-85b3-46e5-8e1f-3939a1eafc14","Type":"ContainerDied","Data":"2559b1e803e1faf3d83019464ced1d4c0dc44dc837a9baeffc4e35e0723d1c99"} Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.656158 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2559b1e803e1faf3d83019464ced1d4c0dc44dc837a9baeffc4e35e0723d1c99" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.656205 4990 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.656205 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hr68g"
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.678969 4990 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.678996 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.679011 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.679020 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmknx\" (UniqueName: \"kubernetes.io/projected/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-kube-api-access-kmknx\") on node \"crc\" DevicePath \"\""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.679029 4990 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.679038 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.706252 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g"
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.785082 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-config\") pod \"c2582b55-f142-43da-9aac-24ccc08026c2\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.785222 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkcv8\" (UniqueName: \"kubernetes.io/projected/c2582b55-f142-43da-9aac-24ccc08026c2-kube-api-access-mkcv8\") pod \"c2582b55-f142-43da-9aac-24ccc08026c2\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.785273 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-swift-storage-0\") pod \"c2582b55-f142-43da-9aac-24ccc08026c2\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") "
Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.785323 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-sb\") pod \"c2582b55-f142-43da-9aac-24ccc08026c2\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") "
\"c2582b55-f142-43da-9aac-24ccc08026c2\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.785381 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-svc\") pod \"c2582b55-f142-43da-9aac-24ccc08026c2\" (UID: \"c2582b55-f142-43da-9aac-24ccc08026c2\") " Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.803012 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2582b55-f142-43da-9aac-24ccc08026c2-kube-api-access-mkcv8" (OuterVolumeSpecName: "kube-api-access-mkcv8") pod "c2582b55-f142-43da-9aac-24ccc08026c2" (UID: "c2582b55-f142-43da-9aac-24ccc08026c2"). InnerVolumeSpecName "kube-api-access-mkcv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.836130 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2582b55-f142-43da-9aac-24ccc08026c2" (UID: "c2582b55-f142-43da-9aac-24ccc08026c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.853011 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2582b55-f142-43da-9aac-24ccc08026c2" (UID: "c2582b55-f142-43da-9aac-24ccc08026c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.865973 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c2582b55-f142-43da-9aac-24ccc08026c2" (UID: "c2582b55-f142-43da-9aac-24ccc08026c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.870708 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2582b55-f142-43da-9aac-24ccc08026c2" (UID: "c2582b55-f142-43da-9aac-24ccc08026c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.876809 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-config" (OuterVolumeSpecName: "config") pod "c2582b55-f142-43da-9aac-24ccc08026c2" (UID: "c2582b55-f142-43da-9aac-24ccc08026c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.887234 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkcv8\" (UniqueName: \"kubernetes.io/projected/c2582b55-f142-43da-9aac-24ccc08026c2-kube-api-access-mkcv8\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.887264 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.887273 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.887281 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.887290 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.887301 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2582b55-f142-43da-9aac-24ccc08026c2-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:24 crc kubenswrapper[4990]: I1205 01:32:24.975970 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7rhp4"] Dec 05 01:32:24 crc kubenswrapper[4990]: W1205 01:32:24.983874 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9631ba6_27a2_4f46_a578_e3f4998aca10.slice/crio-912c493283cb1f83d11d5163328fb4119365c97d31a4de0be4e24eff1464410d WatchSource:0}: Error finding container 912c493283cb1f83d11d5163328fb4119365c97d31a4de0be4e24eff1464410d: Status 404 returned error can't find the container with id 912c493283cb1f83d11d5163328fb4119365c97d31a4de0be4e24eff1464410d Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.133546 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c5f858c6d-zxwsh"] Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.213524 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.306253 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fdcd7bc79-skn69"] Dec 05 01:32:25 crc kubenswrapper[4990]: W1205 01:32:25.318520 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d8e2e9_244e_48b4_b99f_2606dc492482.slice/crio-209a2976d8ce4f163ae305fe2afc6542db4baa1c285f4f5db32237e0ce4eaf53 WatchSource:0}: Error finding container 209a2976d8ce4f163ae305fe2afc6542db4baa1c285f4f5db32237e0ce4eaf53: Status 404 returned error can't find the container with id 209a2976d8ce4f163ae305fe2afc6542db4baa1c285f4f5db32237e0ce4eaf53 Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.417303 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64869d6796-xppnk"] Dec 
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.679228 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5f858c6d-zxwsh" event={"ID":"82eb03c9-869c-447d-9b78-b4ef916b59ac","Type":"ContainerStarted","Data":"60103cd0845a8ee4ac3a9f4a4b4913b499c5becaac1a1bf97f551b44867160a5"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.679600 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dc9df8c96-j8dx7"]
Dec 05 01:32:25 crc kubenswrapper[4990]: E1205 01:32:25.680056 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="init"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.680072 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="init"
Dec 05 01:32:25 crc kubenswrapper[4990]: E1205 01:32:25.680096 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="dnsmasq-dns"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.680104 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="dnsmasq-dns"
Dec 05 01:32:25 crc kubenswrapper[4990]: E1205 01:32:25.680141 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" containerName="keystone-bootstrap"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.680149 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" containerName="keystone-bootstrap"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.680358 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" containerName="keystone-bootstrap"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.680383 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" containerName="dnsmasq-dns"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.681102 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5f858c6d-zxwsh" event={"ID":"82eb03c9-869c-447d-9b78-b4ef916b59ac","Type":"ContainerStarted","Data":"9b75c4c6401f15bf4e7fe79181580337ec0abfc49e524320e6166604b34401d8"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.681207 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc9df8c96-j8dx7"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.689392 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.691181 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.691452 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.691663 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2q9vq"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.691678 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.691933 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.696316 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerStarted","Data":"e21c0a38ed8d9c45cefdb293f3ae8dd9cfc097ae4c78d7e392ae61375067595e"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.702525 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc9df8c96-j8dx7"]
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.714655 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6fb5b68-4f83-45ef-986e-a527b3ebca9e","Type":"ContainerStarted","Data":"a39aa18dc38c07e356abc2caf86cec5d1844dd50b2ac31cb5b8de89bcdf0e096"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.745688 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7b19a4b-c55b-4845-ae65-09442bb0a29f","Type":"ContainerStarted","Data":"f92b3157954ae1e0901abae8f1ca06362c77e8a1b87c7be59008bbae1ff2ade3"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.763192 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcd7bc79-skn69" event={"ID":"60d8e2e9-244e-48b4-b99f-2606dc492482","Type":"ContainerStarted","Data":"209a2976d8ce4f163ae305fe2afc6542db4baa1c285f4f5db32237e0ce4eaf53"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.772851 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g" event={"ID":"c2582b55-f142-43da-9aac-24ccc08026c2","Type":"ContainerDied","Data":"441b01e2b163f7ae310d26bc742aadf6e7de0ff3d6b6080d096380acf53b76cb"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.772886 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-tjw7g"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.772915 4990 scope.go:117] "RemoveContainer" containerID="eb7853ec4615bd3b4f8b1cfe4db92abd0558f4507366bce8aa9b848b7dcc73aa"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.778135 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64869d6796-xppnk" event={"ID":"b7d9b4ac-28a9-4f92-9313-f93dc53ca476","Type":"ContainerStarted","Data":"2bec6c1886d8470eaed12b62075ff75ea960952343796d547d68718e1fd20992"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.793948 4990 generic.go:334] "Generic (PLEG): container finished" podID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerID="8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64" exitCode=0
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.793994 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" event={"ID":"e9631ba6-27a2-4f46-a578-e3f4998aca10","Type":"ContainerDied","Data":"8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.794023 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" event={"ID":"e9631ba6-27a2-4f46-a578-e3f4998aca10","Type":"ContainerStarted","Data":"912c493283cb1f83d11d5163328fb4119365c97d31a4de0be4e24eff1464410d"}
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.814146 4990 scope.go:117] "RemoveContainer" containerID="22f8a3eeee8c338694123a2520b6f8852d4c8a9aed7d7e245132cab8ed1c47aa"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.816981 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rxk\" (UniqueName: \"kubernetes.io/projected/dd33dbb9-4e51-47db-8129-a93493234f7f-kube-api-access-88rxk\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817029 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-fernet-keys\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817057 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-scripts\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7"
Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817092 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-combined-ca-bundle\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7"
\"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817166 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-public-tls-certs\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817196 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-credential-keys\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817242 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-config-data\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.817340 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-tjw7g"] Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.841554 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-tjw7g"] Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920358 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rxk\" (UniqueName: \"kubernetes.io/projected/dd33dbb9-4e51-47db-8129-a93493234f7f-kube-api-access-88rxk\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920401 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-fernet-keys\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920431 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-scripts\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920467 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-combined-ca-bundle\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920505 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-internal-tls-certs\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc 
kubenswrapper[4990]: I1205 01:32:25.920526 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-public-tls-certs\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920547 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-credential-keys\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.920581 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-config-data\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.942381 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-fernet-keys\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.944137 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-combined-ca-bundle\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.951197 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rxk\" (UniqueName: \"kubernetes.io/projected/dd33dbb9-4e51-47db-8129-a93493234f7f-kube-api-access-88rxk\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.951688 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-scripts\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.951811 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2582b55-f142-43da-9aac-24ccc08026c2" path="/var/lib/kubelet/pods/c2582b55-f142-43da-9aac-24ccc08026c2/volumes" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.960249 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-config-data\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.966015 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-credential-keys\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: 
\"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.972455 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-public-tls-certs\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:25 crc kubenswrapper[4990]: I1205 01:32:25.978709 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-internal-tls-certs\") pod \"keystone-5dc9df8c96-j8dx7\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.010965 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.566390 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc9df8c96-j8dx7"] Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.819319 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6fb5b68-4f83-45ef-986e-a527b3ebca9e","Type":"ContainerStarted","Data":"544abadeec7efdd87d142f532801c8d40534098d72192ae34756e3fed5d92963"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.821924 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7b19a4b-c55b-4845-ae65-09442bb0a29f","Type":"ContainerStarted","Data":"c2f3a7ecfe85bc53d25f24925ad8fdbd1e77d2d5dbc23b3711c9b89b49e3ba37"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.827537 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64869d6796-xppnk" event={"ID":"b7d9b4ac-28a9-4f92-9313-f93dc53ca476","Type":"ContainerStarted","Data":"fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.827555 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64869d6796-xppnk" event={"ID":"b7d9b4ac-28a9-4f92-9313-f93dc53ca476","Type":"ContainerStarted","Data":"1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.828332 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.845064 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcd7bc79-skn69" event={"ID":"60d8e2e9-244e-48b4-b99f-2606dc492482","Type":"ContainerStarted","Data":"ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.845107 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcd7bc79-skn69" event={"ID":"60d8e2e9-244e-48b4-b99f-2606dc492482","Type":"ContainerStarted","Data":"eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.848338 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.850634 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-5dc9df8c96-j8dx7" event={"ID":"dd33dbb9-4e51-47db-8129-a93493234f7f","Type":"ContainerStarted","Data":"3e9bebf78587e12677198988d037699ec4f8d730cb5ad058fa2ae4af7b220345"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.857082 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5f858c6d-zxwsh" event={"ID":"82eb03c9-869c-447d-9b78-b4ef916b59ac","Type":"ContainerStarted","Data":"5d677ac4cf7f17763cb57fdbd241dbbc43ee718b5236104aeaebafadb2a7637a"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.860977 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.861045 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.867938 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" event={"ID":"e9631ba6-27a2-4f46-a578-e3f4998aca10","Type":"ContainerStarted","Data":"605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014"} Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.868192 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.868182184 podStartE2EDuration="7.868182184s" podCreationTimestamp="2025-12-05 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:26.841570639 +0000 UTC m=+1445.217786000" watchObservedRunningTime="2025-12-05 01:32:26.868182184 +0000 UTC m=+1445.244397545" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.869149 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.879357 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64869d6796-xppnk" podStartSLOduration=6.87933786 podStartE2EDuration="6.87933786s" podCreationTimestamp="2025-12-05 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:26.869633196 +0000 UTC m=+1445.245848557" watchObservedRunningTime="2025-12-05 01:32:26.87933786 +0000 UTC m=+1445.255553221" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.890902 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c5f858c6d-zxwsh" podStartSLOduration=6.890885488 podStartE2EDuration="6.890885488s" podCreationTimestamp="2025-12-05 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:26.888532421 +0000 UTC m=+1445.264747792" watchObservedRunningTime="2025-12-05 01:32:26.890885488 +0000 UTC m=+1445.267100849" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.907700 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" podStartSLOduration=6.907682834 podStartE2EDuration="6.907682834s" podCreationTimestamp="2025-12-05 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:26.906767478 +0000 UTC m=+1445.282982839" 
watchObservedRunningTime="2025-12-05 01:32:26.907682834 +0000 UTC m=+1445.283898185" Dec 05 01:32:26 crc kubenswrapper[4990]: I1205 01:32:26.930218 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7fdcd7bc79-skn69" podStartSLOduration=3.930194503 podStartE2EDuration="3.930194503s" podCreationTimestamp="2025-12-05 01:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:26.921402354 +0000 UTC m=+1445.297617705" watchObservedRunningTime="2025-12-05 01:32:26.930194503 +0000 UTC m=+1445.306409864" Dec 05 01:32:27 crc kubenswrapper[4990]: I1205 01:32:27.878579 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7b19a4b-c55b-4845-ae65-09442bb0a29f","Type":"ContainerStarted","Data":"bf2fdceb0b6d06482f716e5de13b7c5667b3c943dfb1c3e9136a043bd5ec7bf3"} Dec 05 01:32:27 crc kubenswrapper[4990]: I1205 01:32:27.880709 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9df8c96-j8dx7" event={"ID":"dd33dbb9-4e51-47db-8129-a93493234f7f","Type":"ContainerStarted","Data":"ddee66ac66bbe9676ade95661263c28f7cfb48141f52a3c8dcd54d952118736b"} Dec 05 01:32:27 crc kubenswrapper[4990]: I1205 01:32:27.911965 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.91194564 podStartE2EDuration="7.91194564s" podCreationTimestamp="2025-12-05 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:27.902120321 +0000 UTC m=+1446.278335682" watchObservedRunningTime="2025-12-05 01:32:27.91194564 +0000 UTC m=+1446.288161001" Dec 05 01:32:27 crc kubenswrapper[4990]: I1205 01:32:27.932343 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dc9df8c96-j8dx7" podStartSLOduration=2.932325078 podStartE2EDuration="2.932325078s" podCreationTimestamp="2025-12-05 01:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:27.924202958 +0000 UTC m=+1446.300418339" watchObservedRunningTime="2025-12-05 01:32:27.932325078 +0000 UTC m=+1446.308540439" Dec 05 01:32:28 crc kubenswrapper[4990]: I1205 01:32:28.893781 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:29 crc kubenswrapper[4990]: I1205 01:32:29.979542 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 01:32:29 crc kubenswrapper[4990]: I1205 01:32:29.980020 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 01:32:30 crc kubenswrapper[4990]: I1205 01:32:30.023367 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 01:32:30 crc kubenswrapper[4990]: I1205 01:32:30.031517 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 01:32:30 crc kubenswrapper[4990]: I1205 01:32:30.911693 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 01:32:30 crc kubenswrapper[4990]: I1205 
01:32:30.912024 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.095995 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.096055 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.129848 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.142710 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.301659 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.361979 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-r8v6b"] Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.362217 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerName="dnsmasq-dns" containerID="cri-o://debdff4e77cd7ef088d5b1c07f0dce5c7871cfdf001e627a8484c99f302671d2" gracePeriod=10 Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.928637 4990 generic.go:334] "Generic (PLEG): container finished" podID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerID="debdff4e77cd7ef088d5b1c07f0dce5c7871cfdf001e627a8484c99f302671d2" exitCode=0 Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.928724 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" event={"ID":"f1a8d39e-13c0-476d-801a-21d79f4b3009","Type":"ContainerDied","Data":"debdff4e77cd7ef088d5b1c07f0dce5c7871cfdf001e627a8484c99f302671d2"} Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.948750 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:31 crc kubenswrapper[4990]: I1205 01:32:31.948782 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:32 crc kubenswrapper[4990]: I1205 01:32:32.938633 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:32:32 crc kubenswrapper[4990]: I1205 01:32:32.982177 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 01:32:32 crc kubenswrapper[4990]: I1205 01:32:32.983218 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 01:32:33 crc kubenswrapper[4990]: I1205 01:32:33.927518 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:34 crc kubenswrapper[4990]: I1205 01:32:34.692849 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 01:32:34 crc kubenswrapper[4990]: I1205 01:32:34.913315 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:32:34 crc kubenswrapper[4990]: I1205 01:32:34.967932 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" event={"ID":"f1a8d39e-13c0-476d-801a-21d79f4b3009","Type":"ContainerDied","Data":"1ef1678fa28f2557ef84a6a74a590b8903e92e86bbb4cbb0fe10d93df69b8233"} Dec 05 01:32:34 crc kubenswrapper[4990]: I1205 01:32:34.968314 4990 scope.go:117] "RemoveContainer" containerID="debdff4e77cd7ef088d5b1c07f0dce5c7871cfdf001e627a8484c99f302671d2" Dec 05 01:32:34 crc kubenswrapper[4990]: I1205 01:32:34.968835 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-r8v6b" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.023020 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82x4s\" (UniqueName: \"kubernetes.io/projected/f1a8d39e-13c0-476d-801a-21d79f4b3009-kube-api-access-82x4s\") pod \"f1a8d39e-13c0-476d-801a-21d79f4b3009\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.023087 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-sb\") pod \"f1a8d39e-13c0-476d-801a-21d79f4b3009\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.023129 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-config\") pod \"f1a8d39e-13c0-476d-801a-21d79f4b3009\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.023190 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-swift-storage-0\") pod \"f1a8d39e-13c0-476d-801a-21d79f4b3009\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.023219 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-nb\") pod \"f1a8d39e-13c0-476d-801a-21d79f4b3009\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.023268 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-svc\") pod \"f1a8d39e-13c0-476d-801a-21d79f4b3009\" (UID: \"f1a8d39e-13c0-476d-801a-21d79f4b3009\") " Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.030897 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a8d39e-13c0-476d-801a-21d79f4b3009-kube-api-access-82x4s" (OuterVolumeSpecName: "kube-api-access-82x4s") pod "f1a8d39e-13c0-476d-801a-21d79f4b3009" (UID: "f1a8d39e-13c0-476d-801a-21d79f4b3009"). InnerVolumeSpecName "kube-api-access-82x4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.083885 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-config" (OuterVolumeSpecName: "config") pod "f1a8d39e-13c0-476d-801a-21d79f4b3009" (UID: "f1a8d39e-13c0-476d-801a-21d79f4b3009"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.089343 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1a8d39e-13c0-476d-801a-21d79f4b3009" (UID: "f1a8d39e-13c0-476d-801a-21d79f4b3009"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.092827 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1a8d39e-13c0-476d-801a-21d79f4b3009" (UID: "f1a8d39e-13c0-476d-801a-21d79f4b3009"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.095318 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1a8d39e-13c0-476d-801a-21d79f4b3009" (UID: "f1a8d39e-13c0-476d-801a-21d79f4b3009"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.104650 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1a8d39e-13c0-476d-801a-21d79f4b3009" (UID: "f1a8d39e-13c0-476d-801a-21d79f4b3009"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.125234 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.125259 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.125268 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.125278 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82x4s\" (UniqueName: \"kubernetes.io/projected/f1a8d39e-13c0-476d-801a-21d79f4b3009-kube-api-access-82x4s\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.125288 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.125296 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a8d39e-13c0-476d-801a-21d79f4b3009-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.132263 4990 scope.go:117] "RemoveContainer" containerID="9173c51ad2edcf89d697431f605057c5fd46290698aa23de4087798cbe3ccfe7" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.309567 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-r8v6b"] Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.320058 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-r8v6b"] Dec 05 01:32:35 crc kubenswrapper[4990]: E1205 01:32:35.409358 4990 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a8d39e_13c0_476d_801a_21d79f4b3009.slice/crio-1ef1678fa28f2557ef84a6a74a590b8903e92e86bbb4cbb0fe10d93df69b8233\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a8d39e_13c0_476d_801a_21d79f4b3009.slice\": RecentStats: unable to find data in memory cache]" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.942648 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" path="/var/lib/kubelet/pods/f1a8d39e-13c0-476d-801a-21d79f4b3009/volumes" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.977564 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kzf4n" event={"ID":"d5be3dfc-61e4-495c-8b0b-22f417664a9c","Type":"ContainerStarted","Data":"47ee167b0a8a9940f0690f6d46577d988195ffd84ef5e376de9c87b35275956b"} Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.982398 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerStarted","Data":"6dc786f153beecc331a501ad19a100d4d1025988e608abb32b89d6ab7dd7aa1a"} Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.982542 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-central-agent" containerID="cri-o://6e85fe47a7a6f9ef3e39d75c22d4f8b3891f3587a46157b30aa2c670468bd3fa" gracePeriod=30 Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.982747 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.982800 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="proxy-httpd" containerID="cri-o://6dc786f153beecc331a501ad19a100d4d1025988e608abb32b89d6ab7dd7aa1a" gracePeriod=30 Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.982859 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="sg-core" containerID="cri-o://e21c0a38ed8d9c45cefdb293f3ae8dd9cfc097ae4c78d7e392ae61375067595e" gracePeriod=30 Dec 05 01:32:35 crc kubenswrapper[4990]: I1205 01:32:35.982900 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-notification-agent" containerID="cri-o://49e3c4b998337a905dec9be39a02d602a7fbee657480dbb9698096fb943acc37" gracePeriod=30 Dec 05 01:32:36 crc kubenswrapper[4990]: I1205 01:32:36.001638 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2qddb" event={"ID":"1cf00e7d-d396-4719-b077-bd14781d8836","Type":"ContainerStarted","Data":"a6a0b8906f99b50879cb560fb51bbf9c450509d546577e95f54a1e3df9551cd6"} Dec 05 01:32:36 crc kubenswrapper[4990]: I1205 01:32:36.037143 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kzf4n" podStartSLOduration=3.081315716 podStartE2EDuration="41.037119506s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" firstStartedPulling="2025-12-05 01:31:56.994249681 +0000 UTC m=+1415.370465042" lastFinishedPulling="2025-12-05 01:32:34.950053461 +0000 UTC m=+1453.326268832" observedRunningTime="2025-12-05 01:32:36.008317379 +0000 UTC m=+1454.384532780" watchObservedRunningTime="2025-12-05 01:32:36.037119506 +0000 UTC m=+1454.413334868" Dec 05 01:32:36 crc kubenswrapper[4990]: I1205 01:32:36.037393 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.884526593 podStartE2EDuration="41.037387484s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" firstStartedPulling="2025-12-05 01:31:56.78520108 +0000 UTC m=+1415.161416441" lastFinishedPulling="2025-12-05 01:32:34.938061971 +0000 UTC m=+1453.314277332" observedRunningTime="2025-12-05 01:32:36.034799941 +0000 UTC m=+1454.411015302" watchObservedRunningTime="2025-12-05 01:32:36.037387484 +0000 UTC m=+1454.413602845" Dec 05 01:32:36 crc kubenswrapper[4990]: I1205 01:32:36.059217 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2qddb" podStartSLOduration=3.026331726 podStartE2EDuration="41.059199343s" podCreationTimestamp="2025-12-05 01:31:55 +0000 UTC" 
firstStartedPulling="2025-12-05 01:31:56.908806247 +0000 UTC m=+1415.285021608" lastFinishedPulling="2025-12-05 01:32:34.941673864 +0000 UTC m=+1453.317889225" observedRunningTime="2025-12-05 01:32:36.051506405 +0000 UTC m=+1454.427721796" watchObservedRunningTime="2025-12-05 01:32:36.059199343 +0000 UTC m=+1454.435414704" Dec 05 01:32:37 crc kubenswrapper[4990]: I1205 01:32:37.003069 4990 generic.go:334] "Generic (PLEG): container finished" podID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerID="6dc786f153beecc331a501ad19a100d4d1025988e608abb32b89d6ab7dd7aa1a" exitCode=0 Dec 05 01:32:37 crc kubenswrapper[4990]: I1205 01:32:37.003443 4990 generic.go:334] "Generic (PLEG): container finished" podID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerID="e21c0a38ed8d9c45cefdb293f3ae8dd9cfc097ae4c78d7e392ae61375067595e" exitCode=2 Dec 05 01:32:37 crc kubenswrapper[4990]: I1205 01:32:37.003455 4990 generic.go:334] "Generic (PLEG): container finished" podID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerID="6e85fe47a7a6f9ef3e39d75c22d4f8b3891f3587a46157b30aa2c670468bd3fa" exitCode=0 Dec 05 01:32:37 crc kubenswrapper[4990]: I1205 01:32:37.003092 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerDied","Data":"6dc786f153beecc331a501ad19a100d4d1025988e608abb32b89d6ab7dd7aa1a"} Dec 05 01:32:37 crc kubenswrapper[4990]: I1205 01:32:37.003567 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerDied","Data":"e21c0a38ed8d9c45cefdb293f3ae8dd9cfc097ae4c78d7e392ae61375067595e"} Dec 05 01:32:37 crc kubenswrapper[4990]: I1205 01:32:37.003585 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerDied","Data":"6e85fe47a7a6f9ef3e39d75c22d4f8b3891f3587a46157b30aa2c670468bd3fa"} Dec 05 01:32:38 crc kubenswrapper[4990]: I1205 01:32:38.013307 4990 generic.go:334] "Generic (PLEG): container finished" podID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" containerID="47ee167b0a8a9940f0690f6d46577d988195ffd84ef5e376de9c87b35275956b" exitCode=0 Dec 05 01:32:38 crc kubenswrapper[4990]: I1205 01:32:38.013357 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kzf4n" event={"ID":"d5be3dfc-61e4-495c-8b0b-22f417664a9c","Type":"ContainerDied","Data":"47ee167b0a8a9940f0690f6d46577d988195ffd84ef5e376de9c87b35275956b"} Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.480748 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.606556 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-combined-ca-bundle\") pod \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.606635 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hlzk\" (UniqueName: \"kubernetes.io/projected/d5be3dfc-61e4-495c-8b0b-22f417664a9c-kube-api-access-4hlzk\") pod \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.606776 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-db-sync-config-data\") pod \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\" (UID: \"d5be3dfc-61e4-495c-8b0b-22f417664a9c\") " Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.612114 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5be3dfc-61e4-495c-8b0b-22f417664a9c" (UID: "d5be3dfc-61e4-495c-8b0b-22f417664a9c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.612308 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5be3dfc-61e4-495c-8b0b-22f417664a9c-kube-api-access-4hlzk" (OuterVolumeSpecName: "kube-api-access-4hlzk") pod "d5be3dfc-61e4-495c-8b0b-22f417664a9c" (UID: "d5be3dfc-61e4-495c-8b0b-22f417664a9c"). InnerVolumeSpecName "kube-api-access-4hlzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.632949 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5be3dfc-61e4-495c-8b0b-22f417664a9c" (UID: "d5be3dfc-61e4-495c-8b0b-22f417664a9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.708554 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.708619 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hlzk\" (UniqueName: \"kubernetes.io/projected/d5be3dfc-61e4-495c-8b0b-22f417664a9c-kube-api-access-4hlzk\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:39 crc kubenswrapper[4990]: I1205 01:32:39.708637 4990 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5be3dfc-61e4-495c-8b0b-22f417664a9c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.039723 4990 generic.go:334] "Generic (PLEG): container finished" podID="1cf00e7d-d396-4719-b077-bd14781d8836" containerID="a6a0b8906f99b50879cb560fb51bbf9c450509d546577e95f54a1e3df9551cd6" exitCode=0 Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.039816 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2qddb" event={"ID":"1cf00e7d-d396-4719-b077-bd14781d8836","Type":"ContainerDied","Data":"a6a0b8906f99b50879cb560fb51bbf9c450509d546577e95f54a1e3df9551cd6"} Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.045139 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kzf4n" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.045298 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kzf4n" event={"ID":"d5be3dfc-61e4-495c-8b0b-22f417664a9c","Type":"ContainerDied","Data":"a8863e20559e14da0748ec57aaaa9174b9cb8bac8de2f651cfacc4189a87f18d"} Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.045331 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8863e20559e14da0748ec57aaaa9174b9cb8bac8de2f651cfacc4189a87f18d" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.061252 4990 generic.go:334] "Generic (PLEG): container finished" podID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerID="49e3c4b998337a905dec9be39a02d602a7fbee657480dbb9698096fb943acc37" exitCode=0 Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.061305 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerDied","Data":"49e3c4b998337a905dec9be39a02d602a7fbee657480dbb9698096fb943acc37"} Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.196186 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.323991 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-combined-ca-bundle\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.324088 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgg4\" (UniqueName: \"kubernetes.io/projected/67d8cea8-3def-4f61-838e-36ffae0c8705-kube-api-access-kzgg4\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.324188 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-run-httpd\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.324215 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-sg-core-conf-yaml\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.324237 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-config-data\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.324276 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-scripts\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.324327 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-log-httpd\") pod \"67d8cea8-3def-4f61-838e-36ffae0c8705\" (UID: \"67d8cea8-3def-4f61-838e-36ffae0c8705\") " Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.325163 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.328142 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.361675 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-scripts" (OuterVolumeSpecName: "scripts") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.373835 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-557cdcfdf5-b7n8x"] Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374284 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerName="init" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374300 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerName="init" Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374313 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" containerName="barbican-db-sync" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374321 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" containerName="barbican-db-sync" Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374371 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="sg-core" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374380 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="sg-core" Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374398 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerName="dnsmasq-dns" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374405 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerName="dnsmasq-dns" Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374423 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="proxy-httpd" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374430 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="proxy-httpd" Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374447 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-central-agent" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374457 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-central-agent" Dec 05 01:32:40 crc kubenswrapper[4990]: E1205 01:32:40.374499 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-notification-agent" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374509 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-notification-agent" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374700 4990 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f1a8d39e-13c0-476d-801a-21d79f4b3009" containerName="dnsmasq-dns" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374718 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="proxy-httpd" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374733 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" containerName="barbican-db-sync" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374749 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-notification-agent" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374763 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="ceilometer-central-agent" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.374772 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" containerName="sg-core" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.381032 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d8cea8-3def-4f61-838e-36ffae0c8705-kube-api-access-kzgg4" (OuterVolumeSpecName: "kube-api-access-kzgg4") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "kube-api-access-kzgg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.389117 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.403138 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.403958 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tvmcj" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.404199 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.424149 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-557cdcfdf5-b7n8x"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.437904 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.437942 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-combined-ca-bundle\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.437992 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba876d22-269d-46e3-8a91-24c8646d1c75-logs\") pod 
\"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.438016 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffvb\" (UniqueName: \"kubernetes.io/projected/ba876d22-269d-46e3-8a91-24c8646d1c75-kube-api-access-gffvb\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.438054 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data-custom\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.438125 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.438136 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.438157 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67d8cea8-3def-4f61-838e-36ffae0c8705-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.438165 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzgg4\" (UniqueName: \"kubernetes.io/projected/67d8cea8-3def-4f61-838e-36ffae0c8705-kube-api-access-kzgg4\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.472090 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.527413 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.539423 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data-custom\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.539510 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.539521 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.539912 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-combined-ca-bundle\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.539956 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba876d22-269d-46e3-8a91-24c8646d1c75-logs\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.539982 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffvb\" (UniqueName: \"kubernetes.io/projected/ba876d22-269d-46e3-8a91-24c8646d1c75-kube-api-access-gffvb\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.540028 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.540527 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba876d22-269d-46e3-8a91-24c8646d1c75-logs\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.554384 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data-custom\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.555744 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.560872 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.569390 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-combined-ca-bundle\") pod \"barbican-worker-557cdcfdf5-b7n8x\" 
(UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.569835 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.576213 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffvb\" (UniqueName: \"kubernetes.io/projected/ba876d22-269d-46e3-8a91-24c8646d1c75-kube-api-access-gffvb\") pod \"barbican-worker-557cdcfdf5-b7n8x\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.576286 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xs8gq"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.578053 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.587597 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.602389 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xs8gq"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.608043 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643185 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npk4\" (UniqueName: \"kubernetes.io/projected/63e835fa-b411-4a26-b1ca-b1ab627e8269-kube-api-access-2npk4\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643241 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data-custom\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643290 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-svc\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643309 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-config\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643327 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-combined-ca-bundle\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643341 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643357 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdk8\" (UniqueName: \"kubernetes.io/projected/fecef393-81c1-4d16-af9e-3d777782dd2f-kube-api-access-9pdk8\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643377 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fecef393-81c1-4d16-af9e-3d777782dd2f-logs\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643447 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643500 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.643524 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.644190 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.645474 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-config-data" (OuterVolumeSpecName: "config-data") pod "67d8cea8-3def-4f61-838e-36ffae0c8705" (UID: "67d8cea8-3def-4f61-838e-36ffae0c8705"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745063 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5bf5fbf9fd-qjhcn"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745350 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745401 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745432 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745469 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2npk4\" (UniqueName: \"kubernetes.io/projected/63e835fa-b411-4a26-b1ca-b1ab627e8269-kube-api-access-2npk4\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745515 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data-custom\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745560 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-svc\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745579 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-config\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745598 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-combined-ca-bundle\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745615 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745632 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdk8\" (UniqueName: \"kubernetes.io/projected/fecef393-81c1-4d16-af9e-3d777782dd2f-kube-api-access-9pdk8\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745649 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fecef393-81c1-4d16-af9e-3d777782dd2f-logs\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.745711 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d8cea8-3def-4f61-838e-36ffae0c8705-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.746082 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fecef393-81c1-4d16-af9e-3d777782dd2f-logs\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.746397 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.746720 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.746926 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.747422 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.750773 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.757244 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data-custom\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.763131 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-config\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.763230 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-svc\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.769183 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2npk4\" (UniqueName: \"kubernetes.io/projected/63e835fa-b411-4a26-b1ca-b1ab627e8269-kube-api-access-2npk4\") pod \"dnsmasq-dns-85ff748b95-xs8gq\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.776627 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5bf5fbf9fd-qjhcn"] Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.777236 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-combined-ca-bundle\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.779017 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdk8\" (UniqueName: \"kubernetes.io/projected/fecef393-81c1-4d16-af9e-3d777782dd2f-kube-api-access-9pdk8\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 
05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.783438 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data\") pod \"barbican-keystone-listener-67b9d4ffcb-hc2dk\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.847671 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.847722 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvz95\" (UniqueName: \"kubernetes.io/projected/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-kube-api-access-kvz95\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.847751 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-combined-ca-bundle\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.847788 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data-custom\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.847843 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-logs\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.922999 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.937685 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.948994 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvz95\" (UniqueName: \"kubernetes.io/projected/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-kube-api-access-kvz95\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.949057 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-combined-ca-bundle\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.949119 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data-custom\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.949218 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-logs\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.949347 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.950200 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-logs\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.952368 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-combined-ca-bundle\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.953892 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data-custom\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc kubenswrapper[4990]: I1205 01:32:40.958360 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:40 crc 
kubenswrapper[4990]: I1205 01:32:40.973640 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvz95\" (UniqueName: \"kubernetes.io/projected/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-kube-api-access-kvz95\") pod \"barbican-api-5bf5fbf9fd-qjhcn\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.079216 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.079214 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67d8cea8-3def-4f61-838e-36ffae0c8705","Type":"ContainerDied","Data":"8319b58a4db1dea607f2727f0af880660646dd60aea4146efebee8d39423a88c"} Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.079685 4990 scope.go:117] "RemoveContainer" containerID="6dc786f153beecc331a501ad19a100d4d1025988e608abb32b89d6ab7dd7aa1a" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.095245 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.141740 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.160231 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.166543 4990 scope.go:117] "RemoveContainer" containerID="e21c0a38ed8d9c45cefdb293f3ae8dd9cfc097ae4c78d7e392ae61375067595e" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.167542 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.170749 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.173847 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.175654 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.180918 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.188412 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-557cdcfdf5-b7n8x"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.202408 4990 scope.go:117] "RemoveContainer" containerID="49e3c4b998337a905dec9be39a02d602a7fbee657480dbb9698096fb943acc37" Dec 05 01:32:41 crc kubenswrapper[4990]: W1205 01:32:41.211214 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba876d22_269d_46e3_8a91_24c8646d1c75.slice/crio-4a026481cbb4f1521571d3ccaddde62af575ca56c76ba546208570b5cc6e8417 WatchSource:0}: Error finding container 4a026481cbb4f1521571d3ccaddde62af575ca56c76ba546208570b5cc6e8417: Status 404 returned error can't find the container with id 4a026481cbb4f1521571d3ccaddde62af575ca56c76ba546208570b5cc6e8417 Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.224674 4990 scope.go:117] "RemoveContainer" containerID="6e85fe47a7a6f9ef3e39d75c22d4f8b3891f3587a46157b30aa2c670468bd3fa" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254252 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254325 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-log-httpd\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254349 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8bl5\" (UniqueName: \"kubernetes.io/projected/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-kube-api-access-t8bl5\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254372 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-config-data\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254419 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-run-httpd\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254455 4990 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-scripts\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.254515 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.356692 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-run-httpd\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.356994 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-scripts\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.357057 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.357114 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.357138 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-log-httpd\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.357159 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8bl5\" (UniqueName: \"kubernetes.io/projected/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-kube-api-access-t8bl5\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.357194 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-config-data\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.357980 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-log-httpd\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.358029 4990 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-run-httpd\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.362992 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.364146 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-scripts\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.368350 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.369197 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-config-data\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.383219 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8bl5\" (UniqueName: \"kubernetes.io/projected/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-kube-api-access-t8bl5\") pod \"ceilometer-0\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.494507 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.510934 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.530035 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xs8gq"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.652456 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2qddb" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.666422 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-config-data\") pod \"1cf00e7d-d396-4719-b077-bd14781d8836\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.666522 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-scripts\") pod \"1cf00e7d-d396-4719-b077-bd14781d8836\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.666542 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-db-sync-config-data\") pod \"1cf00e7d-d396-4719-b077-bd14781d8836\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.666627 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-combined-ca-bundle\") pod \"1cf00e7d-d396-4719-b077-bd14781d8836\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.666717 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cf00e7d-d396-4719-b077-bd14781d8836-etc-machine-id\") pod \"1cf00e7d-d396-4719-b077-bd14781d8836\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.666765 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlqr\" (UniqueName: \"kubernetes.io/projected/1cf00e7d-d396-4719-b077-bd14781d8836-kube-api-access-ddlqr\") pod \"1cf00e7d-d396-4719-b077-bd14781d8836\" (UID: \"1cf00e7d-d396-4719-b077-bd14781d8836\") " Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.670651 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cf00e7d-d396-4719-b077-bd14781d8836-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1cf00e7d-d396-4719-b077-bd14781d8836" (UID: "1cf00e7d-d396-4719-b077-bd14781d8836"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.676132 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1cf00e7d-d396-4719-b077-bd14781d8836" (UID: "1cf00e7d-d396-4719-b077-bd14781d8836"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.681012 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf00e7d-d396-4719-b077-bd14781d8836-kube-api-access-ddlqr" (OuterVolumeSpecName: "kube-api-access-ddlqr") pod "1cf00e7d-d396-4719-b077-bd14781d8836" (UID: "1cf00e7d-d396-4719-b077-bd14781d8836"). InnerVolumeSpecName "kube-api-access-ddlqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.683908 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-scripts" (OuterVolumeSpecName: "scripts") pod "1cf00e7d-d396-4719-b077-bd14781d8836" (UID: "1cf00e7d-d396-4719-b077-bd14781d8836"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.721948 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cf00e7d-d396-4719-b077-bd14781d8836" (UID: "1cf00e7d-d396-4719-b077-bd14781d8836"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.733831 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5bf5fbf9fd-qjhcn"] Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.772905 4990 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cf00e7d-d396-4719-b077-bd14781d8836-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.773150 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddlqr\" (UniqueName: \"kubernetes.io/projected/1cf00e7d-d396-4719-b077-bd14781d8836-kube-api-access-ddlqr\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.773164 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.773173 4990 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.773185 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.776845 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-config-data" (OuterVolumeSpecName: "config-data") pod "1cf00e7d-d396-4719-b077-bd14781d8836" (UID: "1cf00e7d-d396-4719-b077-bd14781d8836"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.874037 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf00e7d-d396-4719-b077-bd14781d8836-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:41 crc kubenswrapper[4990]: I1205 01:32:41.941319 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d8cea8-3def-4f61-838e-36ffae0c8705" path="/var/lib/kubelet/pods/67d8cea8-3def-4f61-838e-36ffae0c8705/volumes" Dec 05 01:32:42 crc kubenswrapper[4990]: W1205 01:32:42.042874 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc540f0_e077_4687_b2e4_5a0c268ce4a6.slice/crio-f3169f2e01edf23db56218486e86c0abbc6823bfc1ce477a4a7e3645e10a34ba WatchSource:0}: Error finding container f3169f2e01edf23db56218486e86c0abbc6823bfc1ce477a4a7e3645e10a34ba: Status 404 returned error can't find the container with id f3169f2e01edf23db56218486e86c0abbc6823bfc1ce477a4a7e3645e10a34ba Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.046845 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.097779 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerStarted","Data":"f3169f2e01edf23db56218486e86c0abbc6823bfc1ce477a4a7e3645e10a34ba"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.099258 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" event={"ID":"ba876d22-269d-46e3-8a91-24c8646d1c75","Type":"ContainerStarted","Data":"4a026481cbb4f1521571d3ccaddde62af575ca56c76ba546208570b5cc6e8417"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.100630 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" event={"ID":"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac","Type":"ContainerStarted","Data":"71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.100655 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" event={"ID":"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac","Type":"ContainerStarted","Data":"252bf3fc6e090dd737d68dee68f091e98d546e7e23c8a1e06466c291b7d83051"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.102954 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2qddb" event={"ID":"1cf00e7d-d396-4719-b077-bd14781d8836","Type":"ContainerDied","Data":"134274b6e147d54c79e9160d421c2a97a68b77944b8dc07b518ecdd7ec157280"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.102979 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="134274b6e147d54c79e9160d421c2a97a68b77944b8dc07b518ecdd7ec157280" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.103088 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2qddb" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.104735 4990 generic.go:334] "Generic (PLEG): container finished" podID="63e835fa-b411-4a26-b1ca-b1ab627e8269" containerID="702b288001171a3371b151cf966165c5682ebe8844481cd34d4ea46919856c2d" exitCode=0 Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.104803 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" event={"ID":"63e835fa-b411-4a26-b1ca-b1ab627e8269","Type":"ContainerDied","Data":"702b288001171a3371b151cf966165c5682ebe8844481cd34d4ea46919856c2d"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.104829 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" event={"ID":"63e835fa-b411-4a26-b1ca-b1ab627e8269","Type":"ContainerStarted","Data":"9a695d9bc7bb1fa0a9786d52653074b6e6359c41555ef4950c5d791be713a05e"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.109727 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" event={"ID":"fecef393-81c1-4d16-af9e-3d777782dd2f","Type":"ContainerStarted","Data":"a9b42b0996a458efb5b9995ea1b4fbefc0b33a04d03b9a59af1c896dd0e2fc9c"} Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.294733 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:42 crc kubenswrapper[4990]: E1205 01:32:42.295316 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf00e7d-d396-4719-b077-bd14781d8836" containerName="cinder-db-sync" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.295333 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf00e7d-d396-4719-b077-bd14781d8836" containerName="cinder-db-sync" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.295527 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf00e7d-d396-4719-b077-bd14781d8836" containerName="cinder-db-sync" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.296373 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.303109 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.303274 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.303346 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.308242 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qv77m" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.311258 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.328352 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xs8gq"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.404045 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9mtfg"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.406197 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.444010 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9mtfg"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.492014 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.493379 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.495683 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500584 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-scripts\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500654 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500678 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkkw\" (UniqueName: \"kubernetes.io/projected/65dd0b37-694b-4420-9b45-29310b975348-kube-api-access-nlkkw\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500701 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500715 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500797 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65dd0b37-694b-4420-9b45-29310b975348-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.500860 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603501 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65dd0b37-694b-4420-9b45-29310b975348-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " 
pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603550 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603626 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603635 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65dd0b37-694b-4420-9b45-29310b975348-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603647 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsvp\" (UniqueName: \"kubernetes.io/projected/82af9761-818a-4241-86f1-9e4cc6740a66-kube-api-access-bfsvp\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603855 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603910 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data-custom\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.603969 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx22v\" (UniqueName: \"kubernetes.io/projected/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-kube-api-access-jx22v\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604018 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604187 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82af9761-818a-4241-86f1-9e4cc6740a66-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 
01:32:42.604236 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82af9761-818a-4241-86f1-9e4cc6740a66-logs\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604329 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604441 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-config\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604494 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604518 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-scripts\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604564 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604637 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604666 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlkkw\" (UniqueName: \"kubernetes.io/projected/65dd0b37-694b-4420-9b45-29310b975348-kube-api-access-nlkkw\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604692 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.604712 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-scripts\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.609670 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-scripts\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.610042 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.622085 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.622848 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.623015 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlkkw\" (UniqueName: \"kubernetes.io/projected/65dd0b37-694b-4420-9b45-29310b975348-kube-api-access-nlkkw\") pod \"cinder-scheduler-0\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.708128 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data-custom\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709096 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx22v\" (UniqueName: \"kubernetes.io/projected/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-kube-api-access-jx22v\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709116 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709158 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82af9761-818a-4241-86f1-9e4cc6740a66-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709178 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82af9761-818a-4241-86f1-9e4cc6740a66-logs\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709201 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709253 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-config\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709303 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709352 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709386 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-scripts\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709434 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709449 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsvp\" (UniqueName: \"kubernetes.io/projected/82af9761-818a-4241-86f1-9e4cc6740a66-kube-api-access-bfsvp\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.709468 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.710214 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: 
\"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.712525 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82af9761-818a-4241-86f1-9e4cc6740a66-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.712995 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82af9761-818a-4241-86f1-9e4cc6740a66-logs\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.713903 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.714291 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data-custom\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.714514 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-config\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.714814 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.715260 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.715898 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.718829 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-scripts\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.720308 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.731197 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx22v\" (UniqueName: \"kubernetes.io/projected/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-kube-api-access-jx22v\") pod \"dnsmasq-dns-5c9776ccc5-9mtfg\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.749897 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfsvp\" (UniqueName: \"kubernetes.io/projected/82af9761-818a-4241-86f1-9e4cc6740a66-kube-api-access-bfsvp\") pod \"cinder-api-0\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.767914 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.884238 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:32:42 crc kubenswrapper[4990]: I1205 01:32:42.922659 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:32:43 crc kubenswrapper[4990]: I1205 01:32:43.125849 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" event={"ID":"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac","Type":"ContainerStarted","Data":"a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d"} Dec 05 01:32:43 crc kubenswrapper[4990]: I1205 01:32:43.126694 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:43 crc kubenswrapper[4990]: I1205 01:32:43.126777 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:43 crc kubenswrapper[4990]: I1205 01:32:43.153030 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" podStartSLOduration=3.153009935 podStartE2EDuration="3.153009935s" podCreationTimestamp="2025-12-05 01:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:43.145311716 +0000 UTC m=+1461.521527077" watchObservedRunningTime="2025-12-05 01:32:43.153009935 +0000 UTC m=+1461.529225296" Dec 05 01:32:43 crc kubenswrapper[4990]: E1205 01:32:43.502806 4990 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 05 01:32:43 crc kubenswrapper[4990]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/63e835fa-b411-4a26-b1ca-b1ab627e8269/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 01:32:43 crc kubenswrapper[4990]: > podSandboxID="9a695d9bc7bb1fa0a9786d52653074b6e6359c41555ef4950c5d791be713a05e" Dec 05 01:32:43 crc kubenswrapper[4990]: E1205 01:32:43.502976 4990 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 05 01:32:43 crc kubenswrapper[4990]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq 
--interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2npk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85ff748b95-xs8gq_openstack(63e835fa-b411-4a26-b1ca-b1ab627e8269): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/63e835fa-b411-4a26-b1ca-b1ab627e8269/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 01:32:43 crc kubenswrapper[4990]: > logger="UnhandledError" Dec 05 01:32:43 crc kubenswrapper[4990]: E1205 01:32:43.504666 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount 
`/var/lib/kubelet/pods/63e835fa-b411-4a26-b1ca-b1ab627e8269/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" podUID="63e835fa-b411-4a26-b1ca-b1ab627e8269" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.040999 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.087024 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.128092 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9mtfg"] Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.139703 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82af9761-818a-4241-86f1-9e4cc6740a66","Type":"ContainerStarted","Data":"d66c56e0a061c7559143a66c0a66ff10d53a47e7561e58588112346dde5b3c1c"} Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.145118 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerStarted","Data":"9d9ea3a76edd8ead779822b7704b7f9cabbac47b1618f99c4eff2d376685b24e"} Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.146907 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" event={"ID":"ba876d22-269d-46e3-8a91-24c8646d1c75","Type":"ContainerStarted","Data":"fb190cc3a575d8d3e5ae585358a9c8f90d298cdbc882ba5af1329dfb10d6b5c0"} Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.147884 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65dd0b37-694b-4420-9b45-29310b975348","Type":"ContainerStarted","Data":"b44ca91e900e77523a8f24cc1aa6d88b18faecb1b4d5ef6d680836af742492a3"} Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.161091 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" event={"ID":"fecef393-81c1-4d16-af9e-3d777782dd2f","Type":"ContainerStarted","Data":"f737002671cee197ff06fe034d7cff773e243d6fb39490186391618718dde0ca"} Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.637952 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq"
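The CreateContainerError above is consistent with the known subPath teardown race rather than a bad image or spec: each VolumeMount that sets SubPath (here dns-svc, with SubPath:dns-svc in the dumped container spec) is materialized as a bind mount whose prepared source lives under the pod's volume-subpaths/ directory, and this pod (dnsmasq-dns-85ff748b95-xs8gq) was already being replaced by dnsmasq-dns-5c9776ccc5-9mtfg, so by the time CRI-O issued mount(2) the source was gone and the call failed with ENOENT. Below is a minimal standalone illustration of just that failure mode, with a hypothetical pod UID and throwaway paths; it is my sketch, not kubelet or CRI-O source, it is Linux-only, and it needs root (without root the mount fails with EPERM before the source is even checked).

```go
// subpath_enoent.go - standalone illustration (an assumption-laden sketch,
// not kubelet or CRI-O code): bind-mounting a source path that no longer
// exists fails with ENOENT, the same "No such file or directory" CRI-O
// reported above.
package main

import (
	"errors"
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// A scratch directory standing in for the in-container mount target.
	target, err := os.MkdirTemp("", "subpath-target-")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(target)

	// Same shape as the source in the error, but with a hypothetical pod UID;
	// the path deliberately does not exist, mimicking a torn-down volume.
	source := "/var/lib/kubelet/pods/00000000-dead-beef-0000-000000000000/volume-subpaths/dns-svc/dnsmasq-dns/1"

	if err := unix.Mount(source, target, "", unix.MS_BIND, ""); errors.Is(err, unix.ENOENT) {
		fmt.Println("bind mount of missing source failed as in the log:", err)
	} else {
		fmt.Println("unexpected outcome:", err)
	}
}
```

The records that follow show the recovery: the failed pod is deleted (SyncLoop DELETE/REMOVE), its volumes are unmounted and marked detached, and its orphaned volumes directory is cleaned up while the replacement dnsmasq pod starts.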
Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.657773 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-sb\") pod \"63e835fa-b411-4a26-b1ca-b1ab627e8269\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.657813 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-swift-storage-0\") pod \"63e835fa-b411-4a26-b1ca-b1ab627e8269\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.657871 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-nb\") pod \"63e835fa-b411-4a26-b1ca-b1ab627e8269\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.657894 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-config\") pod \"63e835fa-b411-4a26-b1ca-b1ab627e8269\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.657912 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-svc\") pod \"63e835fa-b411-4a26-b1ca-b1ab627e8269\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.657987 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2npk4\" (UniqueName: \"kubernetes.io/projected/63e835fa-b411-4a26-b1ca-b1ab627e8269-kube-api-access-2npk4\") pod \"63e835fa-b411-4a26-b1ca-b1ab627e8269\" (UID: \"63e835fa-b411-4a26-b1ca-b1ab627e8269\") " Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.669248 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e835fa-b411-4a26-b1ca-b1ab627e8269-kube-api-access-2npk4" (OuterVolumeSpecName: "kube-api-access-2npk4") pod "63e835fa-b411-4a26-b1ca-b1ab627e8269" (UID: "63e835fa-b411-4a26-b1ca-b1ab627e8269"). InnerVolumeSpecName "kube-api-access-2npk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.732442 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "63e835fa-b411-4a26-b1ca-b1ab627e8269" (UID: "63e835fa-b411-4a26-b1ca-b1ab627e8269"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.739647 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63e835fa-b411-4a26-b1ca-b1ab627e8269" (UID: "63e835fa-b411-4a26-b1ca-b1ab627e8269"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.740601 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63e835fa-b411-4a26-b1ca-b1ab627e8269" (UID: "63e835fa-b411-4a26-b1ca-b1ab627e8269"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.741309 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63e835fa-b411-4a26-b1ca-b1ab627e8269" (UID: "63e835fa-b411-4a26-b1ca-b1ab627e8269"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.759090 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.759120 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.759132 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2npk4\" (UniqueName: \"kubernetes.io/projected/63e835fa-b411-4a26-b1ca-b1ab627e8269-kube-api-access-2npk4\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.759141 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.759152 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.802802 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-config" (OuterVolumeSpecName: "config") pod "63e835fa-b411-4a26-b1ca-b1ab627e8269" (UID: "63e835fa-b411-4a26-b1ca-b1ab627e8269"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:44 crc kubenswrapper[4990]: I1205 01:32:44.859914 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e835fa-b411-4a26-b1ca-b1ab627e8269-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.174607 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" event={"ID":"ba876d22-269d-46e3-8a91-24c8646d1c75","Type":"ContainerStarted","Data":"acbdcf83ea752767a3b017fe9de23c7e771233760249604ee1ef047acd8ec3f1"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.180152 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" event={"ID":"63e835fa-b411-4a26-b1ca-b1ab627e8269","Type":"ContainerDied","Data":"9a695d9bc7bb1fa0a9786d52653074b6e6359c41555ef4950c5d791be713a05e"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.180192 4990 scope.go:117] "RemoveContainer" containerID="702b288001171a3371b151cf966165c5682ebe8844481cd34d4ea46919856c2d" Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.180200 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xs8gq" Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.183178 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" event={"ID":"fecef393-81c1-4d16-af9e-3d777782dd2f","Type":"ContainerStarted","Data":"861c7f58bd75979689aba5728fe607c3673e66e27dce3331238271fb617060bd"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.188767 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82af9761-818a-4241-86f1-9e4cc6740a66","Type":"ContainerStarted","Data":"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.199280 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" event={"ID":"80cbc6ec-8ce0-4957-9968-f7ce049b66d4","Type":"ContainerDied","Data":"295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.199153 4990 generic.go:334] "Generic (PLEG): container finished" podID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerID="295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873" exitCode=0 Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.200387 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" event={"ID":"80cbc6ec-8ce0-4957-9968-f7ce049b66d4","Type":"ContainerStarted","Data":"382032ed03bf78ded0cb437fda88a1dec44fbcea3b7b176529f8ee14ff3e137f"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.213065 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" podStartSLOduration=2.845960811 podStartE2EDuration="5.213047457s" podCreationTimestamp="2025-12-05 01:32:40 +0000 UTC" firstStartedPulling="2025-12-05 01:32:41.224683659 +0000 UTC m=+1459.600899020" lastFinishedPulling="2025-12-05 01:32:43.591770305 +0000 UTC m=+1461.967985666" observedRunningTime="2025-12-05 01:32:45.207395226 +0000 UTC m=+1463.583610587" watchObservedRunningTime="2025-12-05 01:32:45.213047457 +0000 UTC m=+1463.589262818" Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.246401 4990 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" podStartSLOduration=3.144441501 podStartE2EDuration="5.246384543s" podCreationTimestamp="2025-12-05 01:32:40 +0000 UTC" firstStartedPulling="2025-12-05 01:32:41.498626012 +0000 UTC m=+1459.874841373" lastFinishedPulling="2025-12-05 01:32:43.600569054 +0000 UTC m=+1461.976784415" observedRunningTime="2025-12-05 01:32:45.238307813 +0000 UTC m=+1463.614523174" watchObservedRunningTime="2025-12-05 01:32:45.246384543 +0000 UTC m=+1463.622599904" Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.249342 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerStarted","Data":"4a6fc1996142df0e29ec6b56805c1ef0c2f2bfd30bfecb79b43c48a950a361b3"} Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.335606 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xs8gq"] Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.384750 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xs8gq"] Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.444382 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:45 crc kubenswrapper[4990]: I1205 01:32:45.954191 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e835fa-b411-4a26-b1ca-b1ab627e8269" path="/var/lib/kubelet/pods/63e835fa-b411-4a26-b1ca-b1ab627e8269/volumes" Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.278079 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82af9761-818a-4241-86f1-9e4cc6740a66","Type":"ContainerStarted","Data":"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba"} Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.278240 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api-log" containerID="cri-o://e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3" gracePeriod=30 Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.278292 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api" containerID="cri-o://5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba" gracePeriod=30 Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.278598 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.283103 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" event={"ID":"80cbc6ec-8ce0-4957-9968-f7ce049b66d4","Type":"ContainerStarted","Data":"fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626"} Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.283245 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.286478 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerStarted","Data":"e7f54b1b5c504a300b7c0368d44657fadf84bbbfddf65d7871f2276a5ebed7c0"} Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 
01:32:46.291621 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65dd0b37-694b-4420-9b45-29310b975348","Type":"ContainerStarted","Data":"53b0bfd7b2462543f87d772d7dfa037728fcbd1160cbcca19896f5064f9a4067"} Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.312188 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.312163854 podStartE2EDuration="4.312163854s" podCreationTimestamp="2025-12-05 01:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:46.293730201 +0000 UTC m=+1464.669945562" watchObservedRunningTime="2025-12-05 01:32:46.312163854 +0000 UTC m=+1464.688379215" Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.313282 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" podStartSLOduration=4.313274415 podStartE2EDuration="4.313274415s" podCreationTimestamp="2025-12-05 01:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:46.312843483 +0000 UTC m=+1464.689058854" watchObservedRunningTime="2025-12-05 01:32:46.313274415 +0000 UTC m=+1464.689489776" Dec 05 01:32:46 crc kubenswrapper[4990]: I1205 01:32:46.950514 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023052 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfsvp\" (UniqueName: \"kubernetes.io/projected/82af9761-818a-4241-86f1-9e4cc6740a66-kube-api-access-bfsvp\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023416 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82af9761-818a-4241-86f1-9e4cc6740a66-etc-machine-id\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023461 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data-custom\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023612 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82af9761-818a-4241-86f1-9e4cc6740a66-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023642 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023682 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-combined-ca-bundle\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023700 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-scripts\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.023720 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82af9761-818a-4241-86f1-9e4cc6740a66-logs\") pod \"82af9761-818a-4241-86f1-9e4cc6740a66\" (UID: \"82af9761-818a-4241-86f1-9e4cc6740a66\") " Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.024587 4990 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82af9761-818a-4241-86f1-9e4cc6740a66-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.034704 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-scripts" (OuterVolumeSpecName: "scripts") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.034749 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.042679 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82af9761-818a-4241-86f1-9e4cc6740a66-logs" (OuterVolumeSpecName: "logs") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.063757 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82af9761-818a-4241-86f1-9e4cc6740a66-kube-api-access-bfsvp" (OuterVolumeSpecName: "kube-api-access-bfsvp") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "kube-api-access-bfsvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.066423 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74bb84bc86-8krlf"] Dec 05 01:32:47 crc kubenswrapper[4990]: E1205 01:32:47.066947 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e835fa-b411-4a26-b1ca-b1ab627e8269" containerName="init" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.066962 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e835fa-b411-4a26-b1ca-b1ab627e8269" containerName="init" Dec 05 01:32:47 crc kubenswrapper[4990]: E1205 01:32:47.066981 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.066987 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api" Dec 05 01:32:47 crc kubenswrapper[4990]: E1205 01:32:47.067000 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api-log" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.067008 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api-log" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.070658 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e835fa-b411-4a26-b1ca-b1ab627e8269" containerName="init" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.070692 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api-log" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.070722 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" containerName="cinder-api" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.071721 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.081971 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.082184 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.114562 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74bb84bc86-8krlf"] Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.122201 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.126921 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.126975 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-public-tls-certs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127167 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489a490-bacc-498c-b0e3-d6b5cad13d34-logs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127210 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data-custom\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127248 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwxq\" (UniqueName: \"kubernetes.io/projected/4489a490-bacc-498c-b0e3-d6b5cad13d34-kube-api-access-vwwxq\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127269 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-combined-ca-bundle\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127329 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-internal-tls-certs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127431 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127441 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127450 4990 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82af9761-818a-4241-86f1-9e4cc6740a66-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127460 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfsvp\" (UniqueName: \"kubernetes.io/projected/82af9761-818a-4241-86f1-9e4cc6740a66-kube-api-access-bfsvp\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.127506 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.143742 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data" (OuterVolumeSpecName: "config-data") pod "82af9761-818a-4241-86f1-9e4cc6740a66" (UID: "82af9761-818a-4241-86f1-9e4cc6740a66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229438 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data-custom\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229495 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwwxq\" (UniqueName: \"kubernetes.io/projected/4489a490-bacc-498c-b0e3-d6b5cad13d34-kube-api-access-vwwxq\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229519 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-combined-ca-bundle\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229576 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-internal-tls-certs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229632 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229665 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-public-tls-certs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 
01:32:47.229713 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489a490-bacc-498c-b0e3-d6b5cad13d34-logs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.229763 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af9761-818a-4241-86f1-9e4cc6740a66-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.230154 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489a490-bacc-498c-b0e3-d6b5cad13d34-logs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.234255 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-combined-ca-bundle\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.234610 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data-custom\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.235132 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-public-tls-certs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.236065 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-internal-tls-certs\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.236171 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.247814 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwwxq\" (UniqueName: \"kubernetes.io/projected/4489a490-bacc-498c-b0e3-d6b5cad13d34-kube-api-access-vwwxq\") pod \"barbican-api-74bb84bc86-8krlf\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.311259 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"65dd0b37-694b-4420-9b45-29310b975348","Type":"ContainerStarted","Data":"7096252d13b83e4d071d02b3dd7ec8510bf0890112154f967337d7814d1d45f9"} Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330443 4990 generic.go:334] "Generic (PLEG): container finished" podID="82af9761-818a-4241-86f1-9e4cc6740a66" containerID="5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba" exitCode=0 Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330566 4990 generic.go:334] "Generic (PLEG): container finished" podID="82af9761-818a-4241-86f1-9e4cc6740a66" containerID="e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3" exitCode=143 Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330614 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82af9761-818a-4241-86f1-9e4cc6740a66","Type":"ContainerDied","Data":"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba"} Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330642 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82af9761-818a-4241-86f1-9e4cc6740a66","Type":"ContainerDied","Data":"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3"} Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330653 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82af9761-818a-4241-86f1-9e4cc6740a66","Type":"ContainerDied","Data":"d66c56e0a061c7559143a66c0a66ff10d53a47e7561e58588112346dde5b3c1c"} Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330668 4990 scope.go:117] "RemoveContainer" containerID="5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.330776 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
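The two exit codes in the PLEG records above follow the ordinary 128+signal convention: the cinder-api container exited 0 after handling the graceful stop, while cinder-api-log exited 143, that is 128+15 (SIGTERM), which matches the "Killing container with a grace period" records with gracePeriod=30 at 01:32:46. A one-file decoder for those two values (illustrative only):

```go
// exit_codes.go - quick decoder for the wait-status convention used by the
// exitCode values in the PLEG records above: 0 is a clean exit, 128+N means
// the process died on signal N (so 143 is SIGTERM).
package main

import (
	"fmt"
	"syscall"
)

func describe(code int) string {
	switch {
	case code == 0:
		return "clean exit"
	case code > 128:
		s := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, s)
	default:
		return fmt.Sprintf("application exit status %d", code)
	}
}

func main() {
	for _, code := range []int{0, 143} { // the exitCode values logged above
		fmt.Printf("exitCode=%d -> %s\n", code, describe(code))
	}
}
```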
Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.352638 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerStarted","Data":"4a326ed1d2d8e27b9394ae700bdaac30f713fdf0b35da3963333ed0f68b5b7a8"} Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.352956 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.364298 4990 scope.go:117] "RemoveContainer" containerID="e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.378226 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.390586101 podStartE2EDuration="6.378211682s" podCreationTimestamp="2025-12-05 01:32:41 +0000 UTC" firstStartedPulling="2025-12-05 01:32:42.060010001 +0000 UTC m=+1460.436225362" lastFinishedPulling="2025-12-05 01:32:47.047635582 +0000 UTC m=+1465.423850943" observedRunningTime="2025-12-05 01:32:47.371288916 +0000 UTC m=+1465.747504267" watchObservedRunningTime="2025-12-05 01:32:47.378211682 +0000 UTC m=+1465.754427043" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.379260 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.395960862 podStartE2EDuration="5.379252812s" podCreationTimestamp="2025-12-05 01:32:42 +0000 UTC" firstStartedPulling="2025-12-05 01:32:44.055899374 +0000 UTC m=+1462.432114735" lastFinishedPulling="2025-12-05 01:32:45.039191324 +0000 UTC m=+1463.415406685" observedRunningTime="2025-12-05 01:32:47.339401191 +0000 UTC m=+1465.715616552" watchObservedRunningTime="2025-12-05 01:32:47.379252812 +0000 UTC m=+1465.755468163" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.398606 4990 scope.go:117] "RemoveContainer" containerID="5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba" Dec 05 01:32:47 crc kubenswrapper[4990]: E1205 01:32:47.400082 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba\": container with ID starting with 5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba not found: ID does not exist" containerID="5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.400112 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba"} err="failed to get container status \"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba\": rpc error: code = NotFound desc = could not find container \"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba\": container with ID starting with 5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba not found: ID does not exist" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.400132 4990 scope.go:117] "RemoveContainer" containerID="e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.400180 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:47 crc kubenswrapper[4990]: E1205 01:32:47.400650 4990
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3\": container with ID starting with e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3 not found: ID does not exist" containerID="e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.400672 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3"} err="failed to get container status \"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3\": rpc error: code = NotFound desc = could not find container \"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3\": container with ID starting with e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3 not found: ID does not exist" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.400685 4990 scope.go:117] "RemoveContainer" containerID="5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.404719 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba"} err="failed to get container status \"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba\": rpc error: code = NotFound desc = could not find container \"5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba\": container with ID starting with 5081d4c04c1d8db6e792405ba90597aa8ed82df1e64751565345856482ea44ba not found: ID does not exist" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.404763 4990 scope.go:117] "RemoveContainer" containerID="e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.405276 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.407083 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3"} err="failed to get container status \"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3\": rpc error: code = NotFound desc = could not find container \"e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3\": container with ID starting with e080fb4a40c2fcf2de38ef6b047b38813b3150462f579ea4d7383126101ceff3 not found: ID does not exist" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.413412 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.457565 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.459038 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
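The ContainerStatus "NotFound" and "DeleteContainer returned error" pairs above read as benign double-deletes rather than failures: kubelet removes each container, then re-queries the runtime for IDs CRI-O has already forgotten, so the CRI call comes back as gRPC NotFound and cleanup proceeds, after which the API object is recreated (SyncLoop REMOVE then ADD for cinder-api-0) with a fresh sandbox. Below is a sketch of the distinction such a CRI client draws; the helper name (alreadyGone) and the paraphrased error text are mine, not kubelet's.

```go
// cri_notfound.go - sketch (assumed helper, not kubelet source) of telling
// the benign "container already gone" gRPC answer above apart from a real
// runtime failure.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether err is the gRPC NotFound a runtime returns
// when the queried container ID no longer exists.
func alreadyGone(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Same shape as the errors CRI-O returned for the stale container IDs.
	err := status.Error(codes.NotFound, "could not find container: ID does not exist")
	fmt.Println("ignorable during cleanup:", alreadyGone(err))
}
```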
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.461717 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.462049 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.462359 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.470996 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.535148 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10384219-030b-491b-884f-fd761eba4496-logs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.535412 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-public-tls-certs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.535457 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.535476 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.537704 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.537765 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10384219-030b-491b-884f-fd761eba4496-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.537820 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data-custom\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.537886 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8v2q\" (UniqueName: 
\"kubernetes.io/projected/10384219-030b-491b-884f-fd761eba4496-kube-api-access-w8v2q\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.537950 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-scripts\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639554 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data-custom\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639627 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8v2q\" (UniqueName: \"kubernetes.io/projected/10384219-030b-491b-884f-fd761eba4496-kube-api-access-w8v2q\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639674 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-scripts\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639699 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10384219-030b-491b-884f-fd761eba4496-logs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639732 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-public-tls-certs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639761 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639778 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639797 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639830 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/10384219-030b-491b-884f-fd761eba4496-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.639910 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10384219-030b-491b-884f-fd761eba4496-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.641866 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10384219-030b-491b-884f-fd761eba4496-logs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.646059 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-public-tls-certs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.647429 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-scripts\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.648831 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data-custom\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.650182 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.658354 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.664040 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.666970 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8v2q\" (UniqueName: \"kubernetes.io/projected/10384219-030b-491b-884f-fd761eba4496-kube-api-access-w8v2q\") pod \"cinder-api-0\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.859464 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.923179 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.975708 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82af9761-818a-4241-86f1-9e4cc6740a66" path="/var/lib/kubelet/pods/82af9761-818a-4241-86f1-9e4cc6740a66/volumes" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.980721 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:47 crc kubenswrapper[4990]: I1205 01:32:47.999183 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74bb84bc86-8krlf"] Dec 05 01:32:48 crc kubenswrapper[4990]: I1205 01:32:48.367323 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bb84bc86-8krlf" event={"ID":"4489a490-bacc-498c-b0e3-d6b5cad13d34","Type":"ContainerStarted","Data":"57746098ad306cb3039f8ee75ec4203722dcaf04ca8ec10ec4bae99de921040a"} Dec 05 01:32:48 crc kubenswrapper[4990]: I1205 01:32:48.372438 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bb84bc86-8krlf" event={"ID":"4489a490-bacc-498c-b0e3-d6b5cad13d34","Type":"ContainerStarted","Data":"d7257257bf3dcc51f9892693e78899007d651694cccc8d56e89312f1dcad3bbc"} Dec 05 01:32:48 crc kubenswrapper[4990]: I1205 01:32:48.388771 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.383127 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bb84bc86-8krlf" event={"ID":"4489a490-bacc-498c-b0e3-d6b5cad13d34","Type":"ContainerStarted","Data":"6ddb956e1bc6923210a8635165e707d600786748db6d8f83199e419eecac98d4"} Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.383527 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.383551 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.390423 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10384219-030b-491b-884f-fd761eba4496","Type":"ContainerStarted","Data":"75a01a2a625f0f4818f4355774ffa785f189c55650887e146da7dd75eb006af5"} Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.390472 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10384219-030b-491b-884f-fd761eba4496","Type":"ContainerStarted","Data":"45c0a23118b8b647776826f93bf7d1d16c5b1174ed4446ecd18b09ed67ea9dd3"} Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.412230 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74bb84bc86-8krlf" podStartSLOduration=2.412215455 podStartE2EDuration="2.412215455s" podCreationTimestamp="2025-12-05 01:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:49.404211998 +0000 UTC m=+1467.780427399" watchObservedRunningTime="2025-12-05 01:32:49.412215455 +0000 UTC m=+1467.788430816" Dec 05 01:32:49 crc kubenswrapper[4990]: I1205 01:32:49.468770 4990 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:32:50 crc kubenswrapper[4990]: I1205 01:32:50.403446 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10384219-030b-491b-884f-fd761eba4496","Type":"ContainerStarted","Data":"90907eab6e43a67e9dd116d95af526faeb9a66ef61e70f7e11881689a35c73d5"} Dec 05 01:32:50 crc kubenswrapper[4990]: I1205 01:32:50.403831 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.323293 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.344782 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.344759131 podStartE2EDuration="4.344759131s" podCreationTimestamp="2025-12-05 01:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:32:50.440944175 +0000 UTC m=+1468.817159566" watchObservedRunningTime="2025-12-05 01:32:51.344759131 +0000 UTC m=+1469.720974492" Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.823656 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.823710 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.823750 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.824216 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5555ce4abbfedb686ddef6d7dce409f40c947a09fec383b5821b1209ff394208"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:32:51 crc kubenswrapper[4990]: I1205 01:32:51.824257 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://5555ce4abbfedb686ddef6d7dce409f40c947a09fec383b5821b1209ff394208" gracePeriod=600 Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.198898 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.201322 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.425375 4990 generic.go:334] "Generic (PLEG): container finished" 
podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="5555ce4abbfedb686ddef6d7dce409f40c947a09fec383b5821b1209ff394208" exitCode=0 Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.426326 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"5555ce4abbfedb686ddef6d7dce409f40c947a09fec383b5821b1209ff394208"} Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.426358 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f"} Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.426381 4990 scope.go:117] "RemoveContainer" containerID="ff6ba92961791b172f695a12e8eb19f33bc6e8ba78d861452310d9615b6fa761" Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.769625 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.829650 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7rhp4"] Dec 05 01:32:52 crc kubenswrapper[4990]: I1205 01:32:52.829895 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerName="dnsmasq-dns" containerID="cri-o://605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014" gracePeriod=10 Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.157603 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.239194 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.342877 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.373317 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-sb\") pod \"e9631ba6-27a2-4f46-a578-e3f4998aca10\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.373362 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-svc\") pod \"e9631ba6-27a2-4f46-a578-e3f4998aca10\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.373457 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkbdp\" (UniqueName: \"kubernetes.io/projected/e9631ba6-27a2-4f46-a578-e3f4998aca10-kube-api-access-pkbdp\") pod \"e9631ba6-27a2-4f46-a578-e3f4998aca10\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.373517 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-swift-storage-0\") pod \"e9631ba6-27a2-4f46-a578-e3f4998aca10\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.373587 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-nb\") pod \"e9631ba6-27a2-4f46-a578-e3f4998aca10\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.373613 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-config\") pod \"e9631ba6-27a2-4f46-a578-e3f4998aca10\" (UID: \"e9631ba6-27a2-4f46-a578-e3f4998aca10\") " Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.391830 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9631ba6-27a2-4f46-a578-e3f4998aca10-kube-api-access-pkbdp" (OuterVolumeSpecName: "kube-api-access-pkbdp") pod "e9631ba6-27a2-4f46-a578-e3f4998aca10" (UID: "e9631ba6-27a2-4f46-a578-e3f4998aca10"). InnerVolumeSpecName "kube-api-access-pkbdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.476586 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkbdp\" (UniqueName: \"kubernetes.io/projected/e9631ba6-27a2-4f46-a578-e3f4998aca10-kube-api-access-pkbdp\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.500983 4990 generic.go:334] "Generic (PLEG): container finished" podID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerID="605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014" exitCode=0 Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.501183 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="cinder-scheduler" containerID="cri-o://53b0bfd7b2462543f87d772d7dfa037728fcbd1160cbcca19896f5064f9a4067" gracePeriod=30 Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.501467 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.501856 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" event={"ID":"e9631ba6-27a2-4f46-a578-e3f4998aca10","Type":"ContainerDied","Data":"605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014"} Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.501886 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7rhp4" event={"ID":"e9631ba6-27a2-4f46-a578-e3f4998aca10","Type":"ContainerDied","Data":"912c493283cb1f83d11d5163328fb4119365c97d31a4de0be4e24eff1464410d"} Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.501903 4990 scope.go:117] "RemoveContainer" containerID="605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.502150 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="probe" containerID="cri-o://7096252d13b83e4d071d02b3dd7ec8510bf0890112154f967337d7814d1d45f9" gracePeriod=30 Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.512129 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e9631ba6-27a2-4f46-a578-e3f4998aca10" (UID: "e9631ba6-27a2-4f46-a578-e3f4998aca10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.514808 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.541609 4990 scope.go:117] "RemoveContainer" containerID="8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.547013 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-config" (OuterVolumeSpecName: "config") pod "e9631ba6-27a2-4f46-a578-e3f4998aca10" (UID: "e9631ba6-27a2-4f46-a578-e3f4998aca10"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.557191 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9631ba6-27a2-4f46-a578-e3f4998aca10" (UID: "e9631ba6-27a2-4f46-a578-e3f4998aca10"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.585021 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9631ba6-27a2-4f46-a578-e3f4998aca10" (UID: "e9631ba6-27a2-4f46-a578-e3f4998aca10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.585523 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9631ba6-27a2-4f46-a578-e3f4998aca10" (UID: "e9631ba6-27a2-4f46-a578-e3f4998aca10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.586678 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.586701 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.586710 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.586721 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.586729 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9631ba6-27a2-4f46-a578-e3f4998aca10-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.604635 4990 scope.go:117] "RemoveContainer" containerID="605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014" Dec 05 01:32:53 crc kubenswrapper[4990]: E1205 01:32:53.605057 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014\": container with ID starting with 605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014 not found: ID does not exist" containerID="605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.605184 4990 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014"} err="failed to get container status \"605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014\": rpc error: code = NotFound desc = could not find container \"605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014\": container with ID starting with 605219d351bd577a6b91bfeb9b9bf74701c5142eed2ff346135484790de72014 not found: ID does not exist" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.605385 4990 scope.go:117] "RemoveContainer" containerID="8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64" Dec 05 01:32:53 crc kubenswrapper[4990]: E1205 01:32:53.606073 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64\": container with ID starting with 8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64 not found: ID does not exist" containerID="8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.606109 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64"} err="failed to get container status \"8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64\": rpc error: code = NotFound desc = could not find container \"8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64\": container with ID starting with 8f20cd4f181abeb21ac324c1418e78d12452514563f0dfc9cc1c3f430fc9ef64 not found: ID does not exist" Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.609967 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64869d6796-xppnk"] Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.610162 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64869d6796-xppnk" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-api" containerID="cri-o://1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37" gracePeriod=30 Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.611317 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64869d6796-xppnk" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-httpd" containerID="cri-o://fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4" gracePeriod=30 Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.834691 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7rhp4"] Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.842678 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7rhp4"] Dec 05 01:32:53 crc kubenswrapper[4990]: I1205 01:32:53.945576 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" path="/var/lib/kubelet/pods/e9631ba6-27a2-4f46-a578-e3f4998aca10/volumes" Dec 05 01:32:54 crc kubenswrapper[4990]: I1205 01:32:54.512808 4990 generic.go:334] "Generic (PLEG): container finished" podID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerID="fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4" exitCode=0 Dec 05 01:32:54 crc kubenswrapper[4990]: I1205 01:32:54.512889 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-64869d6796-xppnk" event={"ID":"b7d9b4ac-28a9-4f92-9313-f93dc53ca476","Type":"ContainerDied","Data":"fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4"} Dec 05 01:32:54 crc kubenswrapper[4990]: I1205 01:32:54.516610 4990 generic.go:334] "Generic (PLEG): container finished" podID="65dd0b37-694b-4420-9b45-29310b975348" containerID="7096252d13b83e4d071d02b3dd7ec8510bf0890112154f967337d7814d1d45f9" exitCode=0 Dec 05 01:32:54 crc kubenswrapper[4990]: I1205 01:32:54.516635 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65dd0b37-694b-4420-9b45-29310b975348","Type":"ContainerDied","Data":"7096252d13b83e4d071d02b3dd7ec8510bf0890112154f967337d7814d1d45f9"} Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.548464 4990 generic.go:334] "Generic (PLEG): container finished" podID="65dd0b37-694b-4420-9b45-29310b975348" containerID="53b0bfd7b2462543f87d772d7dfa037728fcbd1160cbcca19896f5064f9a4067" exitCode=0 Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.548507 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65dd0b37-694b-4420-9b45-29310b975348","Type":"ContainerDied","Data":"53b0bfd7b2462543f87d772d7dfa037728fcbd1160cbcca19896f5064f9a4067"} Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.549991 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65dd0b37-694b-4420-9b45-29310b975348","Type":"ContainerDied","Data":"b44ca91e900e77523a8f24cc1aa6d88b18faecb1b4d5ef6d680836af742492a3"} Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.550084 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b44ca91e900e77523a8f24cc1aa6d88b18faecb1b4d5ef6d680836af742492a3" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.569309 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.663705 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-combined-ca-bundle\") pod \"65dd0b37-694b-4420-9b45-29310b975348\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.663814 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65dd0b37-694b-4420-9b45-29310b975348-etc-machine-id\") pod \"65dd0b37-694b-4420-9b45-29310b975348\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.663903 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlkkw\" (UniqueName: \"kubernetes.io/projected/65dd0b37-694b-4420-9b45-29310b975348-kube-api-access-nlkkw\") pod \"65dd0b37-694b-4420-9b45-29310b975348\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.663932 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-scripts\") pod \"65dd0b37-694b-4420-9b45-29310b975348\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.663996 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data\") pod \"65dd0b37-694b-4420-9b45-29310b975348\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.664014 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data-custom\") pod \"65dd0b37-694b-4420-9b45-29310b975348\" (UID: \"65dd0b37-694b-4420-9b45-29310b975348\") " Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.665308 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dd0b37-694b-4420-9b45-29310b975348-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "65dd0b37-694b-4420-9b45-29310b975348" (UID: "65dd0b37-694b-4420-9b45-29310b975348"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.672707 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65dd0b37-694b-4420-9b45-29310b975348-kube-api-access-nlkkw" (OuterVolumeSpecName: "kube-api-access-nlkkw") pod "65dd0b37-694b-4420-9b45-29310b975348" (UID: "65dd0b37-694b-4420-9b45-29310b975348"). InnerVolumeSpecName "kube-api-access-nlkkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.672843 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "65dd0b37-694b-4420-9b45-29310b975348" (UID: "65dd0b37-694b-4420-9b45-29310b975348"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.678297 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-scripts" (OuterVolumeSpecName: "scripts") pod "65dd0b37-694b-4420-9b45-29310b975348" (UID: "65dd0b37-694b-4420-9b45-29310b975348"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.706459 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.745832 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65dd0b37-694b-4420-9b45-29310b975348" (UID: "65dd0b37-694b-4420-9b45-29310b975348"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.768607 4990 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65dd0b37-694b-4420-9b45-29310b975348-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.768637 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlkkw\" (UniqueName: \"kubernetes.io/projected/65dd0b37-694b-4420-9b45-29310b975348-kube-api-access-nlkkw\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.768647 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.768657 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.768665 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.823574 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data" (OuterVolumeSpecName: "config-data") pod "65dd0b37-694b-4420-9b45-29310b975348" (UID: "65dd0b37-694b-4420-9b45-29310b975348"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:57 crc kubenswrapper[4990]: I1205 01:32:57.906140 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65dd0b37-694b-4420-9b45-29310b975348-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.456808 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.516061 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-config\") pod \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.516385 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-combined-ca-bundle\") pod \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.516413 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-ovndb-tls-certs\") pod \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.516879 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-httpd-config\") pod \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.516913 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gckpw\" (UniqueName: \"kubernetes.io/projected/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-kube-api-access-gckpw\") pod \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\" (UID: \"b7d9b4ac-28a9-4f92-9313-f93dc53ca476\") " Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.520072 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b7d9b4ac-28a9-4f92-9313-f93dc53ca476" (UID: "b7d9b4ac-28a9-4f92-9313-f93dc53ca476"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.522591 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-kube-api-access-gckpw" (OuterVolumeSpecName: "kube-api-access-gckpw") pod "b7d9b4ac-28a9-4f92-9313-f93dc53ca476" (UID: "b7d9b4ac-28a9-4f92-9313-f93dc53ca476"). InnerVolumeSpecName "kube-api-access-gckpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.562251 4990 generic.go:334] "Generic (PLEG): container finished" podID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerID="1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37" exitCode=0 Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.562430 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64869d6796-xppnk" event={"ID":"b7d9b4ac-28a9-4f92-9313-f93dc53ca476","Type":"ContainerDied","Data":"1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37"} Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.563272 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64869d6796-xppnk" event={"ID":"b7d9b4ac-28a9-4f92-9313-f93dc53ca476","Type":"ContainerDied","Data":"2bec6c1886d8470eaed12b62075ff75ea960952343796d547d68718e1fd20992"} Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.563296 4990 scope.go:117] "RemoveContainer" containerID="fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.562547 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64869d6796-xppnk" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.563827 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.581264 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7d9b4ac-28a9-4f92-9313-f93dc53ca476" (UID: "b7d9b4ac-28a9-4f92-9313-f93dc53ca476"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.589636 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-config" (OuterVolumeSpecName: "config") pod "b7d9b4ac-28a9-4f92-9313-f93dc53ca476" (UID: "b7d9b4ac-28a9-4f92-9313-f93dc53ca476"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.593635 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b7d9b4ac-28a9-4f92-9313-f93dc53ca476" (UID: "b7d9b4ac-28a9-4f92-9313-f93dc53ca476"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.616647 4990 scope.go:117] "RemoveContainer" containerID="1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.620893 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.621773 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.621794 4990 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.621803 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.621813 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gckpw\" (UniqueName: \"kubernetes.io/projected/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-kube-api-access-gckpw\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.621823 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7d9b4ac-28a9-4f92-9313-f93dc53ca476-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.628132 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.639531 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.639896 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerName="init" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.639914 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerName="init" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.639928 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="probe" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.639934 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="probe" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.639942 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerName="dnsmasq-dns" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.639948 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerName="dnsmasq-dns" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.639965 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="cinder-scheduler" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.639971 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="65dd0b37-694b-4420-9b45-29310b975348" 
containerName="cinder-scheduler" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.639984 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-httpd" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.639990 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-httpd" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.640014 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-api" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.640021 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-api" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.640170 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-httpd" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.640185 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="cinder-scheduler" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.640193 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9631ba6-27a2-4f46-a578-e3f4998aca10" containerName="dnsmasq-dns" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.640203 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="65dd0b37-694b-4420-9b45-29310b975348" containerName="probe" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.640227 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" containerName="neutron-api" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.641385 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.645083 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.649406 4990 scope.go:117] "RemoveContainer" containerID="fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.649820 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4\": container with ID starting with fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4 not found: ID does not exist" containerID="fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.649844 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4"} err="failed to get container status \"fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4\": rpc error: code = NotFound desc = could not find container \"fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4\": container with ID starting with fd26b786c1fe7a9d575a7b911d2719d38d86e70a87bd57838a1d6538234914e4 not found: ID does not exist" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.649864 4990 scope.go:117] "RemoveContainer" containerID="1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37" Dec 05 01:32:58 crc kubenswrapper[4990]: E1205 01:32:58.650025 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37\": container with ID starting with 1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37 not found: ID does not exist" containerID="1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.650040 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37"} err="failed to get container status \"1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37\": rpc error: code = NotFound desc = could not find container \"1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37\": container with ID starting with 1496fbcecadbe5464193dcd2e14f7eb38eb8b68ae3af3b4d268b5721ac40fd37 not found: ID does not exist" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.658842 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.723023 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmb48\" (UniqueName: \"kubernetes.io/projected/9ca5e656-876c-4e87-b049-5c284b211804-kube-api-access-hmb48\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.723105 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca5e656-876c-4e87-b049-5c284b211804-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.723164 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.723190 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.723226 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.723325 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-scripts\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.824642 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-scripts\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.824719 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmb48\" (UniqueName: \"kubernetes.io/projected/9ca5e656-876c-4e87-b049-5c284b211804-kube-api-access-hmb48\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.824767 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca5e656-876c-4e87-b049-5c284b211804-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.824806 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.824823 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.824846 4990 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.825206 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca5e656-876c-4e87-b049-5c284b211804-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.828746 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-scripts\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.829012 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.830024 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.830180 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.847054 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmb48\" (UniqueName: \"kubernetes.io/projected/9ca5e656-876c-4e87-b049-5c284b211804-kube-api-access-hmb48\") pod \"cinder-scheduler-0\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.859578 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.899533 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64869d6796-xppnk"] Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.906032 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64869d6796-xppnk"] Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.965796 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:32:58 crc kubenswrapper[4990]: I1205 01:32:58.982216 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.041422 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5bf5fbf9fd-qjhcn"] Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.041919 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api-log" containerID="cri-o://71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9" gracePeriod=30 Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.042296 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api" containerID="cri-o://a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d" gracePeriod=30 Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.486944 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:32:59 crc kubenswrapper[4990]: W1205 01:32:59.496943 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ca5e656_876c_4e87_b049_5c284b211804.slice/crio-569b2fa4a9d7035a8cf8d7d0f2ceeef59a09e8c13c6717e7f2cff84863304275 WatchSource:0}: Error finding container 569b2fa4a9d7035a8cf8d7d0f2ceeef59a09e8c13c6717e7f2cff84863304275: Status 404 returned error can't find the container with id 569b2fa4a9d7035a8cf8d7d0f2ceeef59a09e8c13c6717e7f2cff84863304275 Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.593237 4990 generic.go:334] "Generic (PLEG): container finished" podID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerID="71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9" exitCode=143 Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.593323 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" event={"ID":"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac","Type":"ContainerDied","Data":"71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9"} Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.595125 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ca5e656-876c-4e87-b049-5c284b211804","Type":"ContainerStarted","Data":"569b2fa4a9d7035a8cf8d7d0f2ceeef59a09e8c13c6717e7f2cff84863304275"} Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.832937 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.945009 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65dd0b37-694b-4420-9b45-29310b975348" path="/var/lib/kubelet/pods/65dd0b37-694b-4420-9b45-29310b975348/volumes" Dec 05 01:32:59 crc kubenswrapper[4990]: I1205 01:32:59.946222 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d9b4ac-28a9-4f92-9313-f93dc53ca476" path="/var/lib/kubelet/pods/b7d9b4ac-28a9-4f92-9313-f93dc53ca476/volumes" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.254009 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 01:33:00 crc 
kubenswrapper[4990]: I1205 01:33:00.255690 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.263763 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.264067 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zpsj9" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.273743 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.285306 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.364820 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.364864 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tschq\" (UniqueName: \"kubernetes.io/projected/f19ad196-b05b-4ade-ba2b-3b532d447f8e-kube-api-access-tschq\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.364999 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.365027 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config-secret\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.466272 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.466330 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tschq\" (UniqueName: \"kubernetes.io/projected/f19ad196-b05b-4ade-ba2b-3b532d447f8e-kube-api-access-tschq\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.466467 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 
01:33:00.466512 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config-secret\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.467435 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.470521 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config-secret\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.472209 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.482927 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tschq\" (UniqueName: \"kubernetes.io/projected/f19ad196-b05b-4ade-ba2b-3b532d447f8e-kube-api-access-tschq\") pod \"openstackclient\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.598277 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 01:33:00 crc kubenswrapper[4990]: I1205 01:33:00.635581 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ca5e656-876c-4e87-b049-5c284b211804","Type":"ContainerStarted","Data":"a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3"} Dec 05 01:33:01 crc kubenswrapper[4990]: I1205 01:33:01.101009 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 01:33:01 crc kubenswrapper[4990]: W1205 01:33:01.110873 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19ad196_b05b_4ade_ba2b_3b532d447f8e.slice/crio-030b8c538b230ec4a0620af592773169617118ddc79c594fd924289b8c21dadd WatchSource:0}: Error finding container 030b8c538b230ec4a0620af592773169617118ddc79c594fd924289b8c21dadd: Status 404 returned error can't find the container with id 030b8c538b230ec4a0620af592773169617118ddc79c594fd924289b8c21dadd Dec 05 01:33:01 crc kubenswrapper[4990]: I1205 01:33:01.643562 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f19ad196-b05b-4ade-ba2b-3b532d447f8e","Type":"ContainerStarted","Data":"030b8c538b230ec4a0620af592773169617118ddc79c594fd924289b8c21dadd"} Dec 05 01:33:01 crc kubenswrapper[4990]: I1205 01:33:01.645019 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ca5e656-876c-4e87-b049-5c284b211804","Type":"ContainerStarted","Data":"7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716"} Dec 05 01:33:01 crc kubenswrapper[4990]: I1205 01:33:01.679730 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.679709718 podStartE2EDuration="3.679709718s" podCreationTimestamp="2025-12-05 01:32:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:01.672361669 +0000 UTC m=+1480.048577030" watchObservedRunningTime="2025-12-05 01:33:01.679709718 +0000 UTC m=+1480.055925079" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.204959 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:57886->10.217.0.155:9311: read: connection reset by peer" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.204973 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:57902->10.217.0.155:9311: read: connection reset by peer" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.602248 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.657024 4990 generic.go:334] "Generic (PLEG): container finished" podID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerID="a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d" exitCode=0 Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.657921 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.658348 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" event={"ID":"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac","Type":"ContainerDied","Data":"a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d"} Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.658382 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bf5fbf9fd-qjhcn" event={"ID":"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac","Type":"ContainerDied","Data":"252bf3fc6e090dd737d68dee68f091e98d546e7e23c8a1e06466c291b7d83051"} Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.658402 4990 scope.go:117] "RemoveContainer" containerID="a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.693124 4990 scope.go:117] "RemoveContainer" containerID="71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.705385 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data-custom\") pod \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.705440 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-combined-ca-bundle\") pod \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.705510 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data\") pod \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.705645 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-logs\") pod \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.705714 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvz95\" (UniqueName: \"kubernetes.io/projected/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-kube-api-access-kvz95\") pod \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\" (UID: \"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac\") " Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.705968 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-logs" (OuterVolumeSpecName: "logs") pod "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" (UID: 
"e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.706538 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.710654 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" (UID: "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.712410 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-kube-api-access-kvz95" (OuterVolumeSpecName: "kube-api-access-kvz95") pod "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" (UID: "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac"). InnerVolumeSpecName "kube-api-access-kvz95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.713695 4990 scope.go:117] "RemoveContainer" containerID="a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d" Dec 05 01:33:02 crc kubenswrapper[4990]: E1205 01:33:02.714028 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d\": container with ID starting with a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d not found: ID does not exist" containerID="a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.714057 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d"} err="failed to get container status \"a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d\": rpc error: code = NotFound desc = could not find container \"a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d\": container with ID starting with a1ae909933d5809e281a2c9cee1cc72d57196a6bcc78dadf82efb7dc38331d3d not found: ID does not exist" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.714082 4990 scope.go:117] "RemoveContainer" containerID="71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9" Dec 05 01:33:02 crc kubenswrapper[4990]: E1205 01:33:02.714447 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9\": container with ID starting with 71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9 not found: ID does not exist" containerID="71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.714488 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9"} err="failed to get container status \"71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9\": rpc error: code = NotFound desc = could not find 
container \"71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9\": container with ID starting with 71fcec34124e4eb840f1f9ef7776afd8161ef32ae7762c975d26dae1c73a9fb9 not found: ID does not exist" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.746237 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" (UID: "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.751242 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data" (OuterVolumeSpecName: "config-data") pod "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" (UID: "e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.808267 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvz95\" (UniqueName: \"kubernetes.io/projected/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-kube-api-access-kvz95\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.808305 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.808317 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.808330 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.991607 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5bf5fbf9fd-qjhcn"] Dec 05 01:33:02 crc kubenswrapper[4990]: I1205 01:33:02.998790 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5bf5fbf9fd-qjhcn"] Dec 05 01:33:03 crc kubenswrapper[4990]: I1205 01:33:03.967980 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" path="/var/lib/kubelet/pods/e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac/volumes" Dec 05 01:33:03 crc kubenswrapper[4990]: I1205 01:33:03.968706 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.393385 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84997d8dc-hzdlp"] Dec 05 01:33:04 crc kubenswrapper[4990]: E1205 01:33:04.394334 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api-log" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.394353 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api-log" Dec 05 01:33:04 crc kubenswrapper[4990]: E1205 01:33:04.394374 4990 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.394380 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.394566 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api-log" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.394591 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c90b6d-ca47-4aa1-8e6a-ffad04e303ac" containerName="barbican-api" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.395542 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.399738 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.399784 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.399743 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.410282 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84997d8dc-hzdlp"] Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.439840 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-public-tls-certs\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.439899 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-combined-ca-bundle\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.439922 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-run-httpd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.440100 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-log-httpd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.440229 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-etc-swift\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " 
pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.440296 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-config-data\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.440318 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-internal-tls-certs\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.440384 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzzd\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-kube-api-access-kmzzd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542593 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-config-data\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542631 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-internal-tls-certs\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542667 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzzd\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-kube-api-access-kmzzd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542718 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-public-tls-certs\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542750 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-combined-ca-bundle\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542768 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-run-httpd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") 
" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542807 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-log-httpd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.542848 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-etc-swift\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.543822 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-run-httpd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.544139 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-log-httpd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.550039 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-etc-swift\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.550245 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-config-data\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.553144 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-combined-ca-bundle\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.554622 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-internal-tls-certs\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.556340 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-public-tls-certs\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.562161 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-kmzzd\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-kube-api-access-kmzzd\") pod \"swift-proxy-84997d8dc-hzdlp\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:04 crc kubenswrapper[4990]: I1205 01:33:04.725926 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.228408 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84997d8dc-hzdlp"] Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.896421 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.896854 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-central-agent" containerID="cri-o://9d9ea3a76edd8ead779822b7704b7f9cabbac47b1618f99c4eff2d376685b24e" gracePeriod=30 Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.897009 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="proxy-httpd" containerID="cri-o://4a326ed1d2d8e27b9394ae700bdaac30f713fdf0b35da3963333ed0f68b5b7a8" gracePeriod=30 Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.897048 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="sg-core" containerID="cri-o://e7f54b1b5c504a300b7c0368d44657fadf84bbbfddf65d7871f2276a5ebed7c0" gracePeriod=30 Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.897078 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-notification-agent" containerID="cri-o://4a6fc1996142df0e29ec6b56805c1ef0c2f2bfd30bfecb79b43c48a950a361b3" gracePeriod=30 Dec 05 01:33:05 crc kubenswrapper[4990]: I1205 01:33:05.904169 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.156:3000/\": EOF" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.251285 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.251821 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-log" containerID="cri-o://a39aa18dc38c07e356abc2caf86cec5d1844dd50b2ac31cb5b8de89bcdf0e096" gracePeriod=30 Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.252214 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-httpd" containerID="cri-o://544abadeec7efdd87d142f532801c8d40534098d72192ae34756e3fed5d92963" gracePeriod=30 Dec 05 01:33:06 crc kubenswrapper[4990]: E1205 01:33:06.331996 4990 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6fb5b68_4f83_45ef_986e_a527b3ebca9e.slice/crio-a39aa18dc38c07e356abc2caf86cec5d1844dd50b2ac31cb5b8de89bcdf0e096.scope\": RecentStats: unable to find data in memory cache]" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.717251 4990 generic.go:334] "Generic (PLEG): container finished" podID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerID="a39aa18dc38c07e356abc2caf86cec5d1844dd50b2ac31cb5b8de89bcdf0e096" exitCode=143 Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.717326 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6fb5b68-4f83-45ef-986e-a527b3ebca9e","Type":"ContainerDied","Data":"a39aa18dc38c07e356abc2caf86cec5d1844dd50b2ac31cb5b8de89bcdf0e096"} Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.721697 4990 generic.go:334] "Generic (PLEG): container finished" podID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerID="4a326ed1d2d8e27b9394ae700bdaac30f713fdf0b35da3963333ed0f68b5b7a8" exitCode=0 Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.721724 4990 generic.go:334] "Generic (PLEG): container finished" podID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerID="e7f54b1b5c504a300b7c0368d44657fadf84bbbfddf65d7871f2276a5ebed7c0" exitCode=2 Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.721737 4990 generic.go:334] "Generic (PLEG): container finished" podID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerID="9d9ea3a76edd8ead779822b7704b7f9cabbac47b1618f99c4eff2d376685b24e" exitCode=0 Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.721755 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerDied","Data":"4a326ed1d2d8e27b9394ae700bdaac30f713fdf0b35da3963333ed0f68b5b7a8"} Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.721788 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerDied","Data":"e7f54b1b5c504a300b7c0368d44657fadf84bbbfddf65d7871f2276a5ebed7c0"} Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.721801 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerDied","Data":"9d9ea3a76edd8ead779822b7704b7f9cabbac47b1618f99c4eff2d376685b24e"} Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.754113 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-d2rxf"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.755562 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.764554 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d2rxf"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.788315 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d400066-a9cb-4663-a398-9b2dfdeba85e-operator-scripts\") pod \"nova-api-db-create-d2rxf\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.788385 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67pz\" (UniqueName: \"kubernetes.io/projected/2d400066-a9cb-4663-a398-9b2dfdeba85e-kube-api-access-l67pz\") pod \"nova-api-db-create-d2rxf\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.850004 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4hxkr"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.874591 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4hxkr"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.874679 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.877353 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ea53-account-create-update-zvnfg"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.878743 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.882362 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.885021 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ea53-account-create-update-zvnfg"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.891030 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d400066-a9cb-4663-a398-9b2dfdeba85e-operator-scripts\") pod \"nova-api-db-create-d2rxf\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.891240 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67pz\" (UniqueName: \"kubernetes.io/projected/2d400066-a9cb-4663-a398-9b2dfdeba85e-kube-api-access-l67pz\") pod \"nova-api-db-create-d2rxf\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.891923 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d400066-a9cb-4663-a398-9b2dfdeba85e-operator-scripts\") pod \"nova-api-db-create-d2rxf\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.923270 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67pz\" (UniqueName: \"kubernetes.io/projected/2d400066-a9cb-4663-a398-9b2dfdeba85e-kube-api-access-l67pz\") pod \"nova-api-db-create-d2rxf\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.953252 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9fvnr"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.954614 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.975896 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9fvnr"] Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.993320 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fck4\" (UniqueName: \"kubernetes.io/projected/61f41429-2e5c-4fa2-adca-fc89ad4f4175-kube-api-access-2fck4\") pod \"nova-api-ea53-account-create-update-zvnfg\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.993572 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72c761da-2168-4966-b204-cddef6555a72-operator-scripts\") pod \"nova-cell0-db-create-4hxkr\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.993616 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvr47\" (UniqueName: \"kubernetes.io/projected/fc1f8c91-37fa-4816-9340-f6345f60a6cf-kube-api-access-rvr47\") pod \"nova-cell1-db-create-9fvnr\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.993803 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f8c91-37fa-4816-9340-f6345f60a6cf-operator-scripts\") pod \"nova-cell1-db-create-9fvnr\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.993869 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lchj8\" (UniqueName: \"kubernetes.io/projected/72c761da-2168-4966-b204-cddef6555a72-kube-api-access-lchj8\") pod \"nova-cell0-db-create-4hxkr\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:06 crc kubenswrapper[4990]: I1205 01:33:06.993949 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f41429-2e5c-4fa2-adca-fc89ad4f4175-operator-scripts\") pod \"nova-api-ea53-account-create-update-zvnfg\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.062186 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5b09-account-create-update-prgtc"] Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.063411 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.065084 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.073628 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.079679 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5b09-account-create-update-prgtc"] Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097618 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lchj8\" (UniqueName: \"kubernetes.io/projected/72c761da-2168-4966-b204-cddef6555a72-kube-api-access-lchj8\") pod \"nova-cell0-db-create-4hxkr\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097656 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f41429-2e5c-4fa2-adca-fc89ad4f4175-operator-scripts\") pod \"nova-api-ea53-account-create-update-zvnfg\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097700 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fck4\" (UniqueName: \"kubernetes.io/projected/61f41429-2e5c-4fa2-adca-fc89ad4f4175-kube-api-access-2fck4\") pod \"nova-api-ea53-account-create-update-zvnfg\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097759 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95552cfe-d576-4654-af71-fcd3c3c983ab-operator-scripts\") pod \"nova-cell0-5b09-account-create-update-prgtc\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097826 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z92h6\" (UniqueName: \"kubernetes.io/projected/95552cfe-d576-4654-af71-fcd3c3c983ab-kube-api-access-z92h6\") pod \"nova-cell0-5b09-account-create-update-prgtc\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097876 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72c761da-2168-4966-b204-cddef6555a72-operator-scripts\") pod \"nova-cell0-db-create-4hxkr\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.097901 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvr47\" (UniqueName: \"kubernetes.io/projected/fc1f8c91-37fa-4816-9340-f6345f60a6cf-kube-api-access-rvr47\") pod \"nova-cell1-db-create-9fvnr\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.098006 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f8c91-37fa-4816-9340-f6345f60a6cf-operator-scripts\") pod \"nova-cell1-db-create-9fvnr\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " 
pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.098451 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f41429-2e5c-4fa2-adca-fc89ad4f4175-operator-scripts\") pod \"nova-api-ea53-account-create-update-zvnfg\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.098759 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f8c91-37fa-4816-9340-f6345f60a6cf-operator-scripts\") pod \"nova-cell1-db-create-9fvnr\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.098758 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72c761da-2168-4966-b204-cddef6555a72-operator-scripts\") pod \"nova-cell0-db-create-4hxkr\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.114035 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvr47\" (UniqueName: \"kubernetes.io/projected/fc1f8c91-37fa-4816-9340-f6345f60a6cf-kube-api-access-rvr47\") pod \"nova-cell1-db-create-9fvnr\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.114463 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lchj8\" (UniqueName: \"kubernetes.io/projected/72c761da-2168-4966-b204-cddef6555a72-kube-api-access-lchj8\") pod \"nova-cell0-db-create-4hxkr\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.130848 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fck4\" (UniqueName: \"kubernetes.io/projected/61f41429-2e5c-4fa2-adca-fc89ad4f4175-kube-api-access-2fck4\") pod \"nova-api-ea53-account-create-update-zvnfg\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.194371 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.199561 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95552cfe-d576-4654-af71-fcd3c3c983ab-operator-scripts\") pod \"nova-cell0-5b09-account-create-update-prgtc\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.199634 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z92h6\" (UniqueName: \"kubernetes.io/projected/95552cfe-d576-4654-af71-fcd3c3c983ab-kube-api-access-z92h6\") pod \"nova-cell0-5b09-account-create-update-prgtc\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.200743 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95552cfe-d576-4654-af71-fcd3c3c983ab-operator-scripts\") pod \"nova-cell0-5b09-account-create-update-prgtc\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.205346 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.219634 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z92h6\" (UniqueName: \"kubernetes.io/projected/95552cfe-d576-4654-af71-fcd3c3c983ab-kube-api-access-z92h6\") pod \"nova-cell0-5b09-account-create-update-prgtc\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.257246 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.257494 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-log" containerID="cri-o://c2f3a7ecfe85bc53d25f24925ad8fdbd1e77d2d5dbc23b3711c9b89b49e3ba37" gracePeriod=30 Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.257616 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-httpd" containerID="cri-o://bf2fdceb0b6d06482f716e5de13b7c5667b3c943dfb1c3e9136a043bd5ec7bf3" gracePeriod=30 Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.271874 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e6ca-account-create-update-wzznq"] Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.272933 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.273176 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.277746 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.284268 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e6ca-account-create-update-wzznq"] Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.381881 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.404490 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62qgv\" (UniqueName: \"kubernetes.io/projected/158b83e3-9326-447b-b100-3fc9f25383b2-kube-api-access-62qgv\") pod \"nova-cell1-e6ca-account-create-update-wzznq\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.404536 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158b83e3-9326-447b-b100-3fc9f25383b2-operator-scripts\") pod \"nova-cell1-e6ca-account-create-update-wzznq\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.506604 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62qgv\" (UniqueName: \"kubernetes.io/projected/158b83e3-9326-447b-b100-3fc9f25383b2-kube-api-access-62qgv\") pod \"nova-cell1-e6ca-account-create-update-wzznq\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.506660 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158b83e3-9326-447b-b100-3fc9f25383b2-operator-scripts\") pod \"nova-cell1-e6ca-account-create-update-wzznq\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.507427 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158b83e3-9326-447b-b100-3fc9f25383b2-operator-scripts\") pod \"nova-cell1-e6ca-account-create-update-wzznq\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.522460 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62qgv\" (UniqueName: \"kubernetes.io/projected/158b83e3-9326-447b-b100-3fc9f25383b2-kube-api-access-62qgv\") pod \"nova-cell1-e6ca-account-create-update-wzznq\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.612499 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.737169 4990 generic.go:334] "Generic (PLEG): container finished" podID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerID="4a6fc1996142df0e29ec6b56805c1ef0c2f2bfd30bfecb79b43c48a950a361b3" exitCode=0 Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.737211 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerDied","Data":"4a6fc1996142df0e29ec6b56805c1ef0c2f2bfd30bfecb79b43c48a950a361b3"} Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.738664 4990 generic.go:334] "Generic (PLEG): container finished" podID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerID="c2f3a7ecfe85bc53d25f24925ad8fdbd1e77d2d5dbc23b3711c9b89b49e3ba37" exitCode=143 Dec 05 01:33:07 crc kubenswrapper[4990]: I1205 01:33:07.738682 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7b19a4b-c55b-4845-ae65-09442bb0a29f","Type":"ContainerDied","Data":"c2f3a7ecfe85bc53d25f24925ad8fdbd1e77d2d5dbc23b3711c9b89b49e3ba37"} Dec 05 01:33:09 crc kubenswrapper[4990]: I1205 01:33:09.171865 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 01:33:09 crc kubenswrapper[4990]: I1205 01:33:09.778818 4990 generic.go:334] "Generic (PLEG): container finished" podID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerID="544abadeec7efdd87d142f532801c8d40534098d72192ae34756e3fed5d92963" exitCode=0 Dec 05 01:33:09 crc kubenswrapper[4990]: I1205 01:33:09.779223 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6fb5b68-4f83-45ef-986e-a527b3ebca9e","Type":"ContainerDied","Data":"544abadeec7efdd87d142f532801c8d40534098d72192ae34756e3fed5d92963"} Dec 05 01:33:10 crc kubenswrapper[4990]: W1205 01:33:10.027397 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb029546_9d20_445a_9926_2a43c235a755.slice/crio-bb3763b8b6534931b38313a1a9508913b7ac093a9fd458f9f2a28d6377938a2a WatchSource:0}: Error finding container bb3763b8b6534931b38313a1a9508913b7ac093a9fd458f9f2a28d6377938a2a: Status 404 returned error can't find the container with id bb3763b8b6534931b38313a1a9508913b7ac093a9fd458f9f2a28d6377938a2a Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.411375 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463196 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8bl5\" (UniqueName: \"kubernetes.io/projected/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-kube-api-access-t8bl5\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463257 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-combined-ca-bundle\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463297 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-log-httpd\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463439 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-run-httpd\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463517 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-config-data\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463550 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-sg-core-conf-yaml\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.463574 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-scripts\") pod \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\" (UID: \"2cc540f0-e077-4687-b2e4-5a0c268ce4a6\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.464303 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.464553 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.467617 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.474245 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-kube-api-access-t8bl5" (OuterVolumeSpecName: "kube-api-access-t8bl5") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). InnerVolumeSpecName "kube-api-access-t8bl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.474393 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-scripts" (OuterVolumeSpecName: "scripts") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.547705 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.565728 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.566015 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.566025 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.566035 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8bl5\" (UniqueName: \"kubernetes.io/projected/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-kube-api-access-t8bl5\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.579421 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.603784 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.646614 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-config-data" (OuterVolumeSpecName: "config-data") pod "2cc540f0-e077-4687-b2e4-5a0c268ce4a6" (UID: "2cc540f0-e077-4687-b2e4-5a0c268ce4a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.667199 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-public-tls-certs\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.667737 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-scripts\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.667898 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-config-data\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.667966 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-combined-ca-bundle\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.668165 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-logs\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.668253 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lvn8\" (UniqueName: \"kubernetes.io/projected/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-kube-api-access-2lvn8\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.668355 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-httpd-run\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.668415 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\" (UID: \"e6fb5b68-4f83-45ef-986e-a527b3ebca9e\") " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.668899 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.668983 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc540f0-e077-4687-b2e4-5a0c268ce4a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.669014 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-logs" 
(OuterVolumeSpecName: "logs") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.669257 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.674737 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-kube-api-access-2lvn8" (OuterVolumeSpecName: "kube-api-access-2lvn8") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "kube-api-access-2lvn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.675700 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-scripts" (OuterVolumeSpecName: "scripts") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.678917 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.705826 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.722030 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d2rxf"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.728050 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.733772 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5b09-account-create-update-prgtc"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773332 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773384 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lvn8\" (UniqueName: \"kubernetes.io/projected/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-kube-api-access-2lvn8\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773396 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773423 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773434 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773442 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.773473 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.778720 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-config-data" (OuterVolumeSpecName: "config-data") pod "e6fb5b68-4f83-45ef-986e-a527b3ebca9e" (UID: "e6fb5b68-4f83-45ef-986e-a527b3ebca9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.797476 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.811576 4990 generic.go:334] "Generic (PLEG): container finished" podID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerID="bf2fdceb0b6d06482f716e5de13b7c5667b3c943dfb1c3e9136a043bd5ec7bf3" exitCode=0 Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.811652 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7b19a4b-c55b-4845-ae65-09442bb0a29f","Type":"ContainerDied","Data":"bf2fdceb0b6d06482f716e5de13b7c5667b3c943dfb1c3e9136a043bd5ec7bf3"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.812983 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d2rxf" event={"ID":"2d400066-a9cb-4663-a398-9b2dfdeba85e","Type":"ContainerStarted","Data":"cc871341c1eac9bd976b052a2dcc82e2f69f2b401ea6d4c28a713ec2734deeb1"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.815002 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84997d8dc-hzdlp" event={"ID":"bb029546-9d20-445a-9926-2a43c235a755","Type":"ContainerStarted","Data":"16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.815033 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84997d8dc-hzdlp" event={"ID":"bb029546-9d20-445a-9926-2a43c235a755","Type":"ContainerStarted","Data":"7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.815044 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84997d8dc-hzdlp" event={"ID":"bb029546-9d20-445a-9926-2a43c235a755","Type":"ContainerStarted","Data":"bb3763b8b6534931b38313a1a9508913b7ac093a9fd458f9f2a28d6377938a2a"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.815088 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.815109 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.839539 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cc540f0-e077-4687-b2e4-5a0c268ce4a6","Type":"ContainerDied","Data":"f3169f2e01edf23db56218486e86c0abbc6823bfc1ce477a4a7e3645e10a34ba"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.839831 4990 scope.go:117] "RemoveContainer" containerID="4a326ed1d2d8e27b9394ae700bdaac30f713fdf0b35da3963333ed0f68b5b7a8" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.840622 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.845135 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" event={"ID":"95552cfe-d576-4654-af71-fcd3c3c983ab","Type":"ContainerStarted","Data":"c321d1f54858e1de25ff959c7711751772d460e8930b6b0f73599b1026723907"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.850466 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f19ad196-b05b-4ade-ba2b-3b532d447f8e","Type":"ContainerStarted","Data":"125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.852662 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6fb5b68-4f83-45ef-986e-a527b3ebca9e","Type":"ContainerDied","Data":"2cdfcccbf806e8c8474a732a23c94d7a00225812631ac17f40244e3366b7efa1"} Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.852754 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.855059 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84997d8dc-hzdlp" podStartSLOduration=6.855042561 podStartE2EDuration="6.855042561s" podCreationTimestamp="2025-12-05 01:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:10.837827333 +0000 UTC m=+1489.214042694" watchObservedRunningTime="2025-12-05 01:33:10.855042561 +0000 UTC m=+1489.231257922" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.868393 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.786584771 podStartE2EDuration="10.868374s" podCreationTimestamp="2025-12-05 01:33:00 +0000 UTC" firstStartedPulling="2025-12-05 01:33:01.11308741 +0000 UTC m=+1479.489302771" lastFinishedPulling="2025-12-05 01:33:10.194876619 +0000 UTC m=+1488.571092000" observedRunningTime="2025-12-05 01:33:10.862864203 +0000 UTC m=+1489.239079564" watchObservedRunningTime="2025-12-05 01:33:10.868374 +0000 UTC m=+1489.244589351" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.874674 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.874695 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6fb5b68-4f83-45ef-986e-a527b3ebca9e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.892837 4990 scope.go:117] "RemoveContainer" containerID="e7f54b1b5c504a300b7c0368d44657fadf84bbbfddf65d7871f2276a5ebed7c0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.902122 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.919826 4990 scope.go:117] "RemoveContainer" containerID="4a6fc1996142df0e29ec6b56805c1ef0c2f2bfd30bfecb79b43c48a950a361b3" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.920056 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:10 crc 
kubenswrapper[4990]: I1205 01:33:10.934052 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.942669 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951284 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:10 crc kubenswrapper[4990]: E1205 01:33:10.951708 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-log" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951731 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-log" Dec 05 01:33:10 crc kubenswrapper[4990]: E1205 01:33:10.951749 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="sg-core" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951757 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="sg-core" Dec 05 01:33:10 crc kubenswrapper[4990]: E1205 01:33:10.951774 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-httpd" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951780 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-httpd" Dec 05 01:33:10 crc kubenswrapper[4990]: E1205 01:33:10.951795 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-central-agent" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951801 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-central-agent" Dec 05 01:33:10 crc kubenswrapper[4990]: E1205 01:33:10.951816 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-notification-agent" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951823 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-notification-agent" Dec 05 01:33:10 crc kubenswrapper[4990]: E1205 01:33:10.951832 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="proxy-httpd" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.951838 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="proxy-httpd" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.952030 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-notification-agent" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.952049 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="ceilometer-central-agent" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.952061 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="sg-core" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.952070 4990 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-httpd" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.952080 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" containerName="proxy-httpd" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.952092 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" containerName="glance-log" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.954225 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.957521 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.958517 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.961202 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.967726 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.974841 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.978469 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.978674 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.985240 4990 scope.go:117] "RemoveContainer" containerID="9d9ea3a76edd8ead779822b7704b7f9cabbac47b1618f99c4eff2d376685b24e" Dec 05 01:33:10 crc kubenswrapper[4990]: I1205 01:33:10.988538 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.081992 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082038 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082064 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082089 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-run-httpd\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082117 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-scripts\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082138 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmpv\" (UniqueName: \"kubernetes.io/projected/34c74f12-e4d3-44c7-87bb-79759b368059-kube-api-access-6vmpv\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082168 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-logs\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082213 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082231 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-config-data\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082250 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082281 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082299 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082318 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-log-httpd\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082334 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrfp\" (UniqueName: \"kubernetes.io/projected/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-kube-api-access-jqrfp\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.082360 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.102959 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e6ca-account-create-update-wzznq"] Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.112619 4990 scope.go:117] "RemoveContainer" containerID="544abadeec7efdd87d142f532801c8d40534098d72192ae34756e3fed5d92963" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.162281 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ea53-account-create-update-zvnfg"] Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.171367 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9fvnr"] Dec 05 01:33:11 crc kubenswrapper[4990]: W1205 01:33:11.174392 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72c761da_2168_4966_b204_cddef6555a72.slice/crio-4f5869215743e08399ab86f943452ca5f73c9c321ebae28414d98d3d2bee29fa WatchSource:0}: Error finding container 4f5869215743e08399ab86f943452ca5f73c9c321ebae28414d98d3d2bee29fa: Status 404 returned error can't find the container with id 4f5869215743e08399ab86f943452ca5f73c9c321ebae28414d98d3d2bee29fa Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.178962 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4hxkr"] Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183603 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183640 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183665 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-log-httpd\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 
01:33:11.183683 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrfp\" (UniqueName: \"kubernetes.io/projected/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-kube-api-access-jqrfp\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183718 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183753 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183773 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183793 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183830 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-run-httpd\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183853 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-scripts\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183872 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmpv\" (UniqueName: \"kubernetes.io/projected/34c74f12-e4d3-44c7-87bb-79759b368059-kube-api-access-6vmpv\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183893 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-logs\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183954 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183973 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-config-data\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.183991 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.186222 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-log-httpd\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.186629 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.189937 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.194839 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-run-httpd\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.195150 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-logs\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.199869 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.201332 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-scripts\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.202020 4990 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.202359 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.206131 4990 scope.go:117] "RemoveContainer" containerID="a39aa18dc38c07e356abc2caf86cec5d1844dd50b2ac31cb5b8de89bcdf0e096" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.210266 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.222638 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.224249 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.227928 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-config-data\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.233808 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrfp\" (UniqueName: \"kubernetes.io/projected/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-kube-api-access-jqrfp\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.239766 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmpv\" (UniqueName: \"kubernetes.io/projected/34c74f12-e4d3-44c7-87bb-79759b368059-kube-api-access-6vmpv\") pod \"ceilometer-0\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.272000 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.293039 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.412961 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.450166 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.489890 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbzzv\" (UniqueName: \"kubernetes.io/projected/f7b19a4b-c55b-4845-ae65-09442bb0a29f-kube-api-access-bbzzv\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.489991 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.490040 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-config-data\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.490087 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-internal-tls-certs\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.490183 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-logs\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.490246 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-httpd-run\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.490264 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-scripts\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.490289 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.495238 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-logs" (OuterVolumeSpecName: "logs") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.497999 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.502632 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.508574 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b19a4b-c55b-4845-ae65-09442bb0a29f-kube-api-access-bbzzv" (OuterVolumeSpecName: "kube-api-access-bbzzv") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "kube-api-access-bbzzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.518575 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-scripts" (OuterVolumeSpecName: "scripts") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.592454 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.592565 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle\") pod \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\" (UID: \"f7b19a4b-c55b-4845-ae65-09442bb0a29f\") " Dec 05 01:33:11 crc kubenswrapper[4990]: W1205 01:33:11.593127 4990 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f7b19a4b-c55b-4845-ae65-09442bb0a29f/volumes/kubernetes.io~secret/combined-ca-bundle Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593142 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593379 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593398 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593410 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593424 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbzzv\" (UniqueName: \"kubernetes.io/projected/f7b19a4b-c55b-4845-ae65-09442bb0a29f-kube-api-access-bbzzv\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593455 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.593501 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b19a4b-c55b-4845-ae65-09442bb0a29f-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.661252 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.670333 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.680572 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-config-data" (OuterVolumeSpecName: "config-data") pod "f7b19a4b-c55b-4845-ae65-09442bb0a29f" (UID: "f7b19a4b-c55b-4845-ae65-09442bb0a29f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.695003 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.695034 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.695043 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b19a4b-c55b-4845-ae65-09442bb0a29f-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.829232 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.888244 4990 generic.go:334] "Generic (PLEG): container finished" podID="95552cfe-d576-4654-af71-fcd3c3c983ab" containerID="6f3878a570f1cef13134c39eefe7ab105ee108418b035d9ba31b4cc89571e005" exitCode=0 Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.888313 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" event={"ID":"95552cfe-d576-4654-af71-fcd3c3c983ab","Type":"ContainerDied","Data":"6f3878a570f1cef13134c39eefe7ab105ee108418b035d9ba31b4cc89571e005"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.891906 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerStarted","Data":"731de6d85eb00d92a5908099905384e5728bdea5b9e737a7cc73317f67edfd0f"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.899394 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" event={"ID":"158b83e3-9326-447b-b100-3fc9f25383b2","Type":"ContainerStarted","Data":"2c37fd99ba8ed3daa5da82a0119a54cd33959a588276037e6fc12867a26ee9ed"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.904044 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" event={"ID":"158b83e3-9326-447b-b100-3fc9f25383b2","Type":"ContainerStarted","Data":"5b9c3f56c17078408b226417e1bf31822e57173e36c28cb09d72d428ce908367"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.917536 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7b19a4b-c55b-4845-ae65-09442bb0a29f","Type":"ContainerDied","Data":"f92b3157954ae1e0901abae8f1ca06362c77e8a1b87c7be59008bbae1ff2ade3"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.917653 4990 scope.go:117] "RemoveContainer" containerID="bf2fdceb0b6d06482f716e5de13b7c5667b3c943dfb1c3e9136a043bd5ec7bf3" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.917824 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.927216 4990 generic.go:334] "Generic (PLEG): container finished" podID="2d400066-a9cb-4663-a398-9b2dfdeba85e" containerID="c4fc68382e82bbd65c9807f60fda89243ec97c97906e4b3b2e244ffc30c13392" exitCode=0 Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.927731 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d2rxf" event={"ID":"2d400066-a9cb-4663-a398-9b2dfdeba85e","Type":"ContainerDied","Data":"c4fc68382e82bbd65c9807f60fda89243ec97c97906e4b3b2e244ffc30c13392"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.934365 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" podStartSLOduration=4.934347606 podStartE2EDuration="4.934347606s" podCreationTimestamp="2025-12-05 01:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:11.924060004 +0000 UTC m=+1490.300275365" watchObservedRunningTime="2025-12-05 01:33:11.934347606 +0000 UTC m=+1490.310562967" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.962745 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc540f0-e077-4687-b2e4-5a0c268ce4a6" path="/var/lib/kubelet/pods/2cc540f0-e077-4687-b2e4-5a0c268ce4a6/volumes" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.963752 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6fb5b68-4f83-45ef-986e-a527b3ebca9e" path="/var/lib/kubelet/pods/e6fb5b68-4f83-45ef-986e-a527b3ebca9e/volumes" Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.969982 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4hxkr" event={"ID":"72c761da-2168-4966-b204-cddef6555a72","Type":"ContainerStarted","Data":"d3c29bdfe7061a79af10402934bcafa151f9c05ededba5e2abcfbccb51695e00"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.970009 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4hxkr" event={"ID":"72c761da-2168-4966-b204-cddef6555a72","Type":"ContainerStarted","Data":"4f5869215743e08399ab86f943452ca5f73c9c321ebae28414d98d3d2bee29fa"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.992805 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea53-account-create-update-zvnfg" event={"ID":"61f41429-2e5c-4fa2-adca-fc89ad4f4175","Type":"ContainerStarted","Data":"6699c1e65bbec24e9ec9e853d5d34df75d7211700fe2400657624212ab90c757"} Dec 05 01:33:11 crc kubenswrapper[4990]: I1205 01:33:11.992848 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea53-account-create-update-zvnfg" event={"ID":"61f41429-2e5c-4fa2-adca-fc89ad4f4175","Type":"ContainerStarted","Data":"c0da063f4a26e124c4acc645b3c6926d271fa9f3395d7c5f3efc48ff923d3a62"} Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.011986 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9fvnr" event={"ID":"fc1f8c91-37fa-4816-9340-f6345f60a6cf","Type":"ContainerStarted","Data":"bfc2d9d9367d1796c1c148e97af6bf3f14bd34ece3eba129aeb8e30655978558"} Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.012295 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9fvnr" 
event={"ID":"fc1f8c91-37fa-4816-9340-f6345f60a6cf","Type":"ContainerStarted","Data":"a873044c928e57215d70af2ec16a83ba7161d96034f92a9e015b3d2eeb716103"} Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.131674 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.214126 4990 scope.go:117] "RemoveContainer" containerID="c2f3a7ecfe85bc53d25f24925ad8fdbd1e77d2d5dbc23b3711c9b89b49e3ba37" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.253597 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.294329 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.317234 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:33:12 crc kubenswrapper[4990]: E1205 01:33:12.317679 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-log" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.317692 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-log" Dec 05 01:33:12 crc kubenswrapper[4990]: E1205 01:33:12.317710 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-httpd" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.317715 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-httpd" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.317890 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-log" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.317899 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" containerName="glance-httpd" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.318846 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.321838 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.321990 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.340197 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446160 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446210 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq282\" (UniqueName: \"kubernetes.io/projected/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-kube-api-access-kq282\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446246 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446270 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446319 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446338 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446358 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.446380 4990 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548130 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq282\" (UniqueName: \"kubernetes.io/projected/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-kube-api-access-kq282\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548199 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548238 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548303 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548324 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548349 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548376 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548428 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.548794 4990 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.549334 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.549401 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.553526 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.553747 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.555330 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.560319 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.567044 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq282\" (UniqueName: \"kubernetes.io/projected/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-kube-api-access-kq282\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.584895 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " pod="openstack/glance-default-internal-api-0" Dec 05 01:33:12 crc kubenswrapper[4990]: I1205 01:33:12.720594 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.047514 4990 generic.go:334] "Generic (PLEG): container finished" podID="72c761da-2168-4966-b204-cddef6555a72" containerID="d3c29bdfe7061a79af10402934bcafa151f9c05ededba5e2abcfbccb51695e00" exitCode=0 Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.048018 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4hxkr" event={"ID":"72c761da-2168-4966-b204-cddef6555a72","Type":"ContainerDied","Data":"d3c29bdfe7061a79af10402934bcafa151f9c05ededba5e2abcfbccb51695e00"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.051740 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d","Type":"ContainerStarted","Data":"d20b6d3367c7e8bc8e6b2c77a261707a5e35a27706e2fb7941de8c18a86ffb76"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.051783 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d","Type":"ContainerStarted","Data":"7ab3c9ce6977bb1403dbb8e5401d5aca03aed36fc403f684f23b40dd751bf40a"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.064512 4990 generic.go:334] "Generic (PLEG): container finished" podID="61f41429-2e5c-4fa2-adca-fc89ad4f4175" containerID="6699c1e65bbec24e9ec9e853d5d34df75d7211700fe2400657624212ab90c757" exitCode=0 Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.064565 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea53-account-create-update-zvnfg" event={"ID":"61f41429-2e5c-4fa2-adca-fc89ad4f4175","Type":"ContainerDied","Data":"6699c1e65bbec24e9ec9e853d5d34df75d7211700fe2400657624212ab90c757"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.068856 4990 generic.go:334] "Generic (PLEG): container finished" podID="fc1f8c91-37fa-4816-9340-f6345f60a6cf" containerID="bfc2d9d9367d1796c1c148e97af6bf3f14bd34ece3eba129aeb8e30655978558" exitCode=0 Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.068911 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9fvnr" event={"ID":"fc1f8c91-37fa-4816-9340-f6345f60a6cf","Type":"ContainerDied","Data":"bfc2d9d9367d1796c1c148e97af6bf3f14bd34ece3eba129aeb8e30655978558"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.100835 4990 generic.go:334] "Generic (PLEG): container finished" podID="158b83e3-9326-447b-b100-3fc9f25383b2" containerID="2c37fd99ba8ed3daa5da82a0119a54cd33959a588276037e6fc12867a26ee9ed" exitCode=0 Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.100901 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" event={"ID":"158b83e3-9326-447b-b100-3fc9f25383b2","Type":"ContainerDied","Data":"2c37fd99ba8ed3daa5da82a0119a54cd33959a588276037e6fc12867a26ee9ed"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.110335 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerStarted","Data":"ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912"} Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.297514 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:33:13 crc kubenswrapper[4990]: W1205 01:33:13.348357 4990 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47e78fdb_b9eb_4edf_9e4c_90831d0e4fb3.slice/crio-c8954ad9b082700a214ede2bb593871ca640480a4372bc2c794dbdcb9fbf4e60 WatchSource:0}: Error finding container c8954ad9b082700a214ede2bb593871ca640480a4372bc2c794dbdcb9fbf4e60: Status 404 returned error can't find the container with id c8954ad9b082700a214ede2bb593871ca640480a4372bc2c794dbdcb9fbf4e60 Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.477021 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.581997 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72c761da-2168-4966-b204-cddef6555a72-operator-scripts\") pod \"72c761da-2168-4966-b204-cddef6555a72\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.582309 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lchj8\" (UniqueName: \"kubernetes.io/projected/72c761da-2168-4966-b204-cddef6555a72-kube-api-access-lchj8\") pod \"72c761da-2168-4966-b204-cddef6555a72\" (UID: \"72c761da-2168-4966-b204-cddef6555a72\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.587393 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c761da-2168-4966-b204-cddef6555a72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72c761da-2168-4966-b204-cddef6555a72" (UID: "72c761da-2168-4966-b204-cddef6555a72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.622759 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c761da-2168-4966-b204-cddef6555a72-kube-api-access-lchj8" (OuterVolumeSpecName: "kube-api-access-lchj8") pod "72c761da-2168-4966-b204-cddef6555a72" (UID: "72c761da-2168-4966-b204-cddef6555a72"). InnerVolumeSpecName "kube-api-access-lchj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.688560 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72c761da-2168-4966-b204-cddef6555a72-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.688588 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lchj8\" (UniqueName: \"kubernetes.io/projected/72c761da-2168-4966-b204-cddef6555a72-kube-api-access-lchj8\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.792275 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.882383 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.889761 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.892092 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67pz\" (UniqueName: \"kubernetes.io/projected/2d400066-a9cb-4663-a398-9b2dfdeba85e-kube-api-access-l67pz\") pod \"2d400066-a9cb-4663-a398-9b2dfdeba85e\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.892201 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d400066-a9cb-4663-a398-9b2dfdeba85e-operator-scripts\") pod \"2d400066-a9cb-4663-a398-9b2dfdeba85e\" (UID: \"2d400066-a9cb-4663-a398-9b2dfdeba85e\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.893112 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d400066-a9cb-4663-a398-9b2dfdeba85e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d400066-a9cb-4663-a398-9b2dfdeba85e" (UID: "2d400066-a9cb-4663-a398-9b2dfdeba85e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.897955 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d400066-a9cb-4663-a398-9b2dfdeba85e-kube-api-access-l67pz" (OuterVolumeSpecName: "kube-api-access-l67pz") pod "2d400066-a9cb-4663-a398-9b2dfdeba85e" (UID: "2d400066-a9cb-4663-a398-9b2dfdeba85e"). InnerVolumeSpecName "kube-api-access-l67pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.936029 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.954028 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b19a4b-c55b-4845-ae65-09442bb0a29f" path="/var/lib/kubelet/pods/f7b19a4b-c55b-4845-ae65-09442bb0a29f/volumes" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.994332 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95552cfe-d576-4654-af71-fcd3c3c983ab-operator-scripts\") pod \"95552cfe-d576-4654-af71-fcd3c3c983ab\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.994868 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvr47\" (UniqueName: \"kubernetes.io/projected/fc1f8c91-37fa-4816-9340-f6345f60a6cf-kube-api-access-rvr47\") pod \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.994889 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z92h6\" (UniqueName: \"kubernetes.io/projected/95552cfe-d576-4654-af71-fcd3c3c983ab-kube-api-access-z92h6\") pod \"95552cfe-d576-4654-af71-fcd3c3c983ab\" (UID: \"95552cfe-d576-4654-af71-fcd3c3c983ab\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.994969 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f8c91-37fa-4816-9340-f6345f60a6cf-operator-scripts\") pod \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\" (UID: \"fc1f8c91-37fa-4816-9340-f6345f60a6cf\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.995037 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f41429-2e5c-4fa2-adca-fc89ad4f4175-operator-scripts\") pod \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.995095 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fck4\" (UniqueName: \"kubernetes.io/projected/61f41429-2e5c-4fa2-adca-fc89ad4f4175-kube-api-access-2fck4\") pod \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\" (UID: \"61f41429-2e5c-4fa2-adca-fc89ad4f4175\") " Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.995536 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l67pz\" (UniqueName: \"kubernetes.io/projected/2d400066-a9cb-4663-a398-9b2dfdeba85e-kube-api-access-l67pz\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.995647 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d400066-a9cb-4663-a398-9b2dfdeba85e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.996001 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc1f8c91-37fa-4816-9340-f6345f60a6cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc1f8c91-37fa-4816-9340-f6345f60a6cf" (UID: "fc1f8c91-37fa-4816-9340-f6345f60a6cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.996314 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95552cfe-d576-4654-af71-fcd3c3c983ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95552cfe-d576-4654-af71-fcd3c3c983ab" (UID: "95552cfe-d576-4654-af71-fcd3c3c983ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:33:13 crc kubenswrapper[4990]: I1205 01:33:13.996626 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f41429-2e5c-4fa2-adca-fc89ad4f4175-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61f41429-2e5c-4fa2-adca-fc89ad4f4175" (UID: "61f41429-2e5c-4fa2-adca-fc89ad4f4175"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.009685 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f41429-2e5c-4fa2-adca-fc89ad4f4175-kube-api-access-2fck4" (OuterVolumeSpecName: "kube-api-access-2fck4") pod "61f41429-2e5c-4fa2-adca-fc89ad4f4175" (UID: "61f41429-2e5c-4fa2-adca-fc89ad4f4175"). InnerVolumeSpecName "kube-api-access-2fck4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.011604 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1f8c91-37fa-4816-9340-f6345f60a6cf-kube-api-access-rvr47" (OuterVolumeSpecName: "kube-api-access-rvr47") pod "fc1f8c91-37fa-4816-9340-f6345f60a6cf" (UID: "fc1f8c91-37fa-4816-9340-f6345f60a6cf"). InnerVolumeSpecName "kube-api-access-rvr47". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.012238 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95552cfe-d576-4654-af71-fcd3c3c983ab-kube-api-access-z92h6" (OuterVolumeSpecName: "kube-api-access-z92h6") pod "95552cfe-d576-4654-af71-fcd3c3c983ab" (UID: "95552cfe-d576-4654-af71-fcd3c3c983ab"). InnerVolumeSpecName "kube-api-access-z92h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.097180 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvr47\" (UniqueName: \"kubernetes.io/projected/fc1f8c91-37fa-4816-9340-f6345f60a6cf-kube-api-access-rvr47\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.097208 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z92h6\" (UniqueName: \"kubernetes.io/projected/95552cfe-d576-4654-af71-fcd3c3c983ab-kube-api-access-z92h6\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.097218 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f8c91-37fa-4816-9340-f6345f60a6cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.097226 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f41429-2e5c-4fa2-adca-fc89ad4f4175-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.097234 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fck4\" (UniqueName: \"kubernetes.io/projected/61f41429-2e5c-4fa2-adca-fc89ad4f4175-kube-api-access-2fck4\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.097242 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95552cfe-d576-4654-af71-fcd3c3c983ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.126606 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.126667 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b09-account-create-update-prgtc" event={"ID":"95552cfe-d576-4654-af71-fcd3c3c983ab","Type":"ContainerDied","Data":"c321d1f54858e1de25ff959c7711751772d460e8930b6b0f73599b1026723907"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.126914 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c321d1f54858e1de25ff959c7711751772d460e8930b6b0f73599b1026723907" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.130454 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea53-account-create-update-zvnfg" event={"ID":"61f41429-2e5c-4fa2-adca-fc89ad4f4175","Type":"ContainerDied","Data":"c0da063f4a26e124c4acc645b3c6926d271fa9f3395d7c5f3efc48ff923d3a62"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.130515 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0da063f4a26e124c4acc645b3c6926d271fa9f3395d7c5f3efc48ff923d3a62" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.130544 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ea53-account-create-update-zvnfg" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.133951 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9fvnr" event={"ID":"fc1f8c91-37fa-4816-9340-f6345f60a6cf","Type":"ContainerDied","Data":"a873044c928e57215d70af2ec16a83ba7161d96034f92a9e015b3d2eeb716103"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.133987 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a873044c928e57215d70af2ec16a83ba7161d96034f92a9e015b3d2eeb716103" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.134059 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9fvnr" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.137189 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerStarted","Data":"512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.141019 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3","Type":"ContainerStarted","Data":"c8954ad9b082700a214ede2bb593871ca640480a4372bc2c794dbdcb9fbf4e60"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.143381 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d2rxf" event={"ID":"2d400066-a9cb-4663-a398-9b2dfdeba85e","Type":"ContainerDied","Data":"cc871341c1eac9bd976b052a2dcc82e2f69f2b401ea6d4c28a713ec2734deeb1"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.143563 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc871341c1eac9bd976b052a2dcc82e2f69f2b401ea6d4c28a713ec2734deeb1" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.143723 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d2rxf" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.148214 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4hxkr" event={"ID":"72c761da-2168-4966-b204-cddef6555a72","Type":"ContainerDied","Data":"4f5869215743e08399ab86f943452ca5f73c9c321ebae28414d98d3d2bee29fa"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.148267 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5869215743e08399ab86f943452ca5f73c9c321ebae28414d98d3d2bee29fa" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.148339 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4hxkr" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.161311 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d","Type":"ContainerStarted","Data":"fe46232d47a2817ca0f3b8f9049c81973cfa81de8d607b295f93e7368320d630"} Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.315451 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.315432598 podStartE2EDuration="4.315432598s" podCreationTimestamp="2025-12-05 01:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:14.20486669 +0000 UTC m=+1492.581082051" watchObservedRunningTime="2025-12-05 01:33:14.315432598 +0000 UTC m=+1492.691647959" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.331331 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.540558 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.617098 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62qgv\" (UniqueName: \"kubernetes.io/projected/158b83e3-9326-447b-b100-3fc9f25383b2-kube-api-access-62qgv\") pod \"158b83e3-9326-447b-b100-3fc9f25383b2\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.617563 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158b83e3-9326-447b-b100-3fc9f25383b2-operator-scripts\") pod \"158b83e3-9326-447b-b100-3fc9f25383b2\" (UID: \"158b83e3-9326-447b-b100-3fc9f25383b2\") " Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.618923 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158b83e3-9326-447b-b100-3fc9f25383b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "158b83e3-9326-447b-b100-3fc9f25383b2" (UID: "158b83e3-9326-447b-b100-3fc9f25383b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.621658 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158b83e3-9326-447b-b100-3fc9f25383b2-kube-api-access-62qgv" (OuterVolumeSpecName: "kube-api-access-62qgv") pod "158b83e3-9326-447b-b100-3fc9f25383b2" (UID: "158b83e3-9326-447b-b100-3fc9f25383b2"). InnerVolumeSpecName "kube-api-access-62qgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.719271 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62qgv\" (UniqueName: \"kubernetes.io/projected/158b83e3-9326-447b-b100-3fc9f25383b2-kube-api-access-62qgv\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:14 crc kubenswrapper[4990]: I1205 01:33:14.719305 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158b83e3-9326-447b-b100-3fc9f25383b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.191578 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerStarted","Data":"8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a"} Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.195615 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3","Type":"ContainerStarted","Data":"5f490bc39eb1a824091b54123c3705eafc0ddd4d3bb92d574be2d6b179034a7a"} Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.195663 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3","Type":"ContainerStarted","Data":"82eb58ebe7ffe6cca157c0c411fe2fb1cb6d998e427d96fff54c39ba5fae459b"} Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.198896 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.199062 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e6ca-account-create-update-wzznq" event={"ID":"158b83e3-9326-447b-b100-3fc9f25383b2","Type":"ContainerDied","Data":"5b9c3f56c17078408b226417e1bf31822e57173e36c28cb09d72d428ce908367"} Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.199124 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9c3f56c17078408b226417e1bf31822e57173e36c28cb09d72d428ce908367" Dec 05 01:33:15 crc kubenswrapper[4990]: I1205 01:33:15.222691 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.22267432 podStartE2EDuration="3.22267432s" podCreationTimestamp="2025-12-05 01:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:15.216073213 +0000 UTC m=+1493.592288574" watchObservedRunningTime="2025-12-05 01:33:15.22267432 +0000 UTC m=+1493.598889681" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.214617 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerStarted","Data":"ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2"} Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.215049 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.214793 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" 
containerName="proxy-httpd" containerID="cri-o://ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2" gracePeriod=30 Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.214753 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-central-agent" containerID="cri-o://ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912" gracePeriod=30 Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.214808 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="sg-core" containerID="cri-o://8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a" gracePeriod=30 Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.214868 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-notification-agent" containerID="cri-o://512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a" gracePeriod=30 Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.245561 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.504544088 podStartE2EDuration="7.245542367s" podCreationTimestamp="2025-12-05 01:33:10 +0000 UTC" firstStartedPulling="2025-12-05 01:33:11.843452967 +0000 UTC m=+1490.219668328" lastFinishedPulling="2025-12-05 01:33:15.584451246 +0000 UTC m=+1493.960666607" observedRunningTime="2025-12-05 01:33:17.241368359 +0000 UTC m=+1495.617583720" watchObservedRunningTime="2025-12-05 01:33:17.245542367 +0000 UTC m=+1495.621757748" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.437527 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gmxzd"] Dec 05 01:33:17 crc kubenswrapper[4990]: E1205 01:33:17.438001 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95552cfe-d576-4654-af71-fcd3c3c983ab" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438025 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="95552cfe-d576-4654-af71-fcd3c3c983ab" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: E1205 01:33:17.438047 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1f8c91-37fa-4816-9340-f6345f60a6cf" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438057 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1f8c91-37fa-4816-9340-f6345f60a6cf" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: E1205 01:33:17.438073 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f41429-2e5c-4fa2-adca-fc89ad4f4175" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438081 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f41429-2e5c-4fa2-adca-fc89ad4f4175" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: E1205 01:33:17.438095 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c761da-2168-4966-b204-cddef6555a72" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438102 4990 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72c761da-2168-4966-b204-cddef6555a72" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: E1205 01:33:17.438118 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158b83e3-9326-447b-b100-3fc9f25383b2" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438128 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="158b83e3-9326-447b-b100-3fc9f25383b2" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: E1205 01:33:17.438136 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d400066-a9cb-4663-a398-9b2dfdeba85e" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438143 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d400066-a9cb-4663-a398-9b2dfdeba85e" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438383 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f41429-2e5c-4fa2-adca-fc89ad4f4175" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438416 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="95552cfe-d576-4654-af71-fcd3c3c983ab" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438428 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d400066-a9cb-4663-a398-9b2dfdeba85e" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438442 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1f8c91-37fa-4816-9340-f6345f60a6cf" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438459 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c761da-2168-4966-b204-cddef6555a72" containerName="mariadb-database-create" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.438475 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="158b83e3-9326-447b-b100-3fc9f25383b2" containerName="mariadb-account-create-update" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.439167 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.441226 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w5ffx" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.443129 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.444996 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.447837 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gmxzd"] Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.571341 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-scripts\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.591506 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcnt\" (UniqueName: \"kubernetes.io/projected/74ff5181-d9ea-4726-97f0-3d62935f4949-kube-api-access-rwcnt\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.591613 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-config-data\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.591676 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.694849 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwcnt\" (UniqueName: \"kubernetes.io/projected/74ff5181-d9ea-4726-97f0-3d62935f4949-kube-api-access-rwcnt\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.694916 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-config-data\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.694961 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gmxzd\" 
(UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.695021 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-scripts\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.701047 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-scripts\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.701458 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.702084 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-config-data\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.710363 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcnt\" (UniqueName: \"kubernetes.io/projected/74ff5181-d9ea-4726-97f0-3d62935f4949-kube-api-access-rwcnt\") pod \"nova-cell0-conductor-db-sync-gmxzd\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:17 crc kubenswrapper[4990]: I1205 01:33:17.786809 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.214977 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gmxzd"] Dec 05 01:33:18 crc kubenswrapper[4990]: W1205 01:33:18.218310 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74ff5181_d9ea_4726_97f0_3d62935f4949.slice/crio-b817e06226e76ce9a34c88a436ce03adbf3be31fce4808191a48cc7a7c2c1c04 WatchSource:0}: Error finding container b817e06226e76ce9a34c88a436ce03adbf3be31fce4808191a48cc7a7c2c1c04: Status 404 returned error can't find the container with id b817e06226e76ce9a34c88a436ce03adbf3be31fce4808191a48cc7a7c2c1c04 Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.226263 4990 generic.go:334] "Generic (PLEG): container finished" podID="34c74f12-e4d3-44c7-87bb-79759b368059" containerID="ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2" exitCode=0 Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.226291 4990 generic.go:334] "Generic (PLEG): container finished" podID="34c74f12-e4d3-44c7-87bb-79759b368059" containerID="8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a" exitCode=2 Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.226305 4990 generic.go:334] "Generic (PLEG): container finished" podID="34c74f12-e4d3-44c7-87bb-79759b368059" containerID="512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a" exitCode=0 Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.226323 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerDied","Data":"ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2"} Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.226351 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerDied","Data":"8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a"} Dec 05 01:33:18 crc kubenswrapper[4990]: I1205 01:33:18.226364 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerDied","Data":"512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a"} Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.086066 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.222757 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-config-data\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.222830 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-log-httpd\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.222854 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vmpv\" (UniqueName: \"kubernetes.io/projected/34c74f12-e4d3-44c7-87bb-79759b368059-kube-api-access-6vmpv\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.222904 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-sg-core-conf-yaml\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.222952 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-scripts\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.222991 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-combined-ca-bundle\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.223060 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-run-httpd\") pod \"34c74f12-e4d3-44c7-87bb-79759b368059\" (UID: \"34c74f12-e4d3-44c7-87bb-79759b368059\") " Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.223174 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.223397 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.223405 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.228388 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-scripts" (OuterVolumeSpecName: "scripts") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.228429 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c74f12-e4d3-44c7-87bb-79759b368059-kube-api-access-6vmpv" (OuterVolumeSpecName: "kube-api-access-6vmpv") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). InnerVolumeSpecName "kube-api-access-6vmpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.246788 4990 generic.go:334] "Generic (PLEG): container finished" podID="34c74f12-e4d3-44c7-87bb-79759b368059" containerID="ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912" exitCode=0 Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.247421 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerDied","Data":"ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912"} Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.247574 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34c74f12-e4d3-44c7-87bb-79759b368059","Type":"ContainerDied","Data":"731de6d85eb00d92a5908099905384e5728bdea5b9e737a7cc73317f67edfd0f"} Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.247669 4990 scope.go:117] "RemoveContainer" containerID="ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.247966 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.260755 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" event={"ID":"74ff5181-d9ea-4726-97f0-3d62935f4949","Type":"ContainerStarted","Data":"b817e06226e76ce9a34c88a436ce03adbf3be31fce4808191a48cc7a7c2c1c04"} Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.281424 4990 scope.go:117] "RemoveContainer" containerID="8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.291663 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.301016 4990 scope.go:117] "RemoveContainer" containerID="512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.315093 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.323315 4990 scope.go:117] "RemoveContainer" containerID="ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.324661 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34c74f12-e4d3-44c7-87bb-79759b368059-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.324692 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vmpv\" (UniqueName: \"kubernetes.io/projected/34c74f12-e4d3-44c7-87bb-79759b368059-kube-api-access-6vmpv\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.324704 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.324712 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.324719 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.331816 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-config-data" (OuterVolumeSpecName: "config-data") pod "34c74f12-e4d3-44c7-87bb-79759b368059" (UID: "34c74f12-e4d3-44c7-87bb-79759b368059"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.426740 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c74f12-e4d3-44c7-87bb-79759b368059-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.430095 4990 scope.go:117] "RemoveContainer" containerID="ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.430616 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2\": container with ID starting with ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2 not found: ID does not exist" containerID="ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.430686 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2"} err="failed to get container status \"ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2\": rpc error: code = NotFound desc = could not find container \"ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2\": container with ID starting with ab77ae47dae64fca4a234998bbf10ca85be4d12784177e739a32c8a28adb9ef2 not found: ID does not exist" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.430719 4990 scope.go:117] "RemoveContainer" containerID="8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.431024 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a\": container with ID starting with 8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a not found: ID does not exist" containerID="8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.431053 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a"} err="failed to get container status \"8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a\": rpc error: code = NotFound desc = could not find container \"8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a\": container with ID starting with 8d88d7ac999e1d9336305193062e40b50e05db27558e79076c149feb57d9ca9a not found: ID does not exist" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.431071 4990 scope.go:117] "RemoveContainer" containerID="512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.431336 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a\": container with ID starting with 512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a not found: ID does not exist" containerID="512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.431394 4990 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a"} err="failed to get container status \"512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a\": rpc error: code = NotFound desc = could not find container \"512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a\": container with ID starting with 512a07df686b77d1cf587cb66a75dd38b6fc8d6c8a20b81260fe6f20fffa516a not found: ID does not exist" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.431438 4990 scope.go:117] "RemoveContainer" containerID="ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.431945 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912\": container with ID starting with ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912 not found: ID does not exist" containerID="ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.431983 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912"} err="failed to get container status \"ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912\": rpc error: code = NotFound desc = could not find container \"ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912\": container with ID starting with ce4956be4d35df01cc3e171029d600573cf11a198bebc2bfe997983b3b4b4912 not found: ID does not exist" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.577766 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.591364 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.618804 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.619129 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-notification-agent" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619145 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-notification-agent" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.619155 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="proxy-httpd" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619161 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="proxy-httpd" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.619193 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="sg-core" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619200 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="sg-core" Dec 05 01:33:19 crc kubenswrapper[4990]: E1205 01:33:19.619210 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" 
containerName="ceilometer-central-agent" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619215 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-central-agent" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619372 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="sg-core" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619386 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="proxy-httpd" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619396 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-notification-agent" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.619410 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" containerName="ceilometer-central-agent" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.621438 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.623438 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.623641 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.640089 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.730575 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-config-data\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.730805 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-log-httpd\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.730915 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.731035 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-scripts\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.731125 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-run-httpd\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 
05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.731232 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.731301 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfzm8\" (UniqueName: \"kubernetes.io/projected/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-kube-api-access-zfzm8\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.738858 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.739911 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.832854 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-scripts\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.832911 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-run-httpd\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.832953 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.832973 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfzm8\" (UniqueName: \"kubernetes.io/projected/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-kube-api-access-zfzm8\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.832998 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-config-data\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.833027 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-log-httpd\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.833074 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.833654 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-run-httpd\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.836024 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-log-httpd\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.838733 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.839777 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-scripts\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.840216 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-config-data\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.843585 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.853069 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfzm8\" (UniqueName: \"kubernetes.io/projected/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-kube-api-access-zfzm8\") pod \"ceilometer-0\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.946081 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:19 crc kubenswrapper[4990]: I1205 01:33:19.954650 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c74f12-e4d3-44c7-87bb-79759b368059" path="/var/lib/kubelet/pods/34c74f12-e4d3-44c7-87bb-79759b368059/volumes" Dec 05 01:33:20 crc kubenswrapper[4990]: I1205 01:33:20.313090 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:21 crc kubenswrapper[4990]: I1205 01:33:21.285551 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerStarted","Data":"c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855"} Dec 05 01:33:21 crc kubenswrapper[4990]: I1205 01:33:21.285826 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerStarted","Data":"4dce1b97b439b3059d0ab51e852eb76a1ba1e334604cc3166a9eda8da28a46d4"} Dec 05 01:33:21 crc kubenswrapper[4990]: I1205 01:33:21.451811 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 01:33:21 crc kubenswrapper[4990]: I1205 01:33:21.451868 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 01:33:21 crc kubenswrapper[4990]: I1205 01:33:21.495883 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 01:33:21 crc kubenswrapper[4990]: I1205 01:33:21.495954 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.296058 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerStarted","Data":"d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b"} Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.296691 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.296710 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.721088 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.722454 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.749190 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:22 crc kubenswrapper[4990]: I1205 01:33:22.759290 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:23 crc kubenswrapper[4990]: I1205 01:33:23.312952 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:23 crc kubenswrapper[4990]: I1205 01:33:23.312984 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:23 crc 
kubenswrapper[4990]: I1205 01:33:23.942847 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:24 crc kubenswrapper[4990]: I1205 01:33:24.214760 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 01:33:24 crc kubenswrapper[4990]: I1205 01:33:24.219701 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 01:33:25 crc kubenswrapper[4990]: I1205 01:33:25.330940 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:33:25 crc kubenswrapper[4990]: I1205 01:33:25.331264 4990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 01:33:25 crc kubenswrapper[4990]: I1205 01:33:25.468535 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:25 crc kubenswrapper[4990]: I1205 01:33:25.472029 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 01:33:30 crc kubenswrapper[4990]: I1205 01:33:30.397626 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" event={"ID":"74ff5181-d9ea-4726-97f0-3d62935f4949","Type":"ContainerStarted","Data":"84eda9e236cf04f0c78610e9e447f88ae6f7b12e26fe33aa20954f632cbbdd2c"} Dec 05 01:33:30 crc kubenswrapper[4990]: I1205 01:33:30.400188 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerStarted","Data":"fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d"} Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.411530 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerStarted","Data":"10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d"} Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.411789 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-notification-agent" containerID="cri-o://d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b" gracePeriod=30 Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.411757 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="sg-core" containerID="cri-o://fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d" gracePeriod=30 Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.411725 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-central-agent" containerID="cri-o://c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855" gracePeriod=30 Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.412023 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.412898 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="proxy-httpd" 
containerID="cri-o://10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d" gracePeriod=30 Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.441202 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" podStartSLOduration=3.364380713 podStartE2EDuration="14.441169039s" podCreationTimestamp="2025-12-05 01:33:17 +0000 UTC" firstStartedPulling="2025-12-05 01:33:18.222839198 +0000 UTC m=+1496.599054559" lastFinishedPulling="2025-12-05 01:33:29.299627524 +0000 UTC m=+1507.675842885" observedRunningTime="2025-12-05 01:33:30.420839118 +0000 UTC m=+1508.797054479" watchObservedRunningTime="2025-12-05 01:33:31.441169039 +0000 UTC m=+1509.817384400" Dec 05 01:33:31 crc kubenswrapper[4990]: I1205 01:33:31.443060 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.174405096 podStartE2EDuration="12.443052332s" podCreationTimestamp="2025-12-05 01:33:19 +0000 UTC" firstStartedPulling="2025-12-05 01:33:20.329089192 +0000 UTC m=+1498.705304553" lastFinishedPulling="2025-12-05 01:33:30.597736418 +0000 UTC m=+1508.973951789" observedRunningTime="2025-12-05 01:33:31.438420481 +0000 UTC m=+1509.814635842" watchObservedRunningTime="2025-12-05 01:33:31.443052332 +0000 UTC m=+1509.819267693" Dec 05 01:33:32 crc kubenswrapper[4990]: I1205 01:33:32.424249 4990 generic.go:334] "Generic (PLEG): container finished" podID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerID="10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d" exitCode=0 Dec 05 01:33:32 crc kubenswrapper[4990]: I1205 01:33:32.424624 4990 generic.go:334] "Generic (PLEG): container finished" podID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerID="fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d" exitCode=2 Dec 05 01:33:32 crc kubenswrapper[4990]: I1205 01:33:32.424640 4990 generic.go:334] "Generic (PLEG): container finished" podID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerID="c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855" exitCode=0 Dec 05 01:33:32 crc kubenswrapper[4990]: I1205 01:33:32.424329 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerDied","Data":"10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d"} Dec 05 01:33:32 crc kubenswrapper[4990]: I1205 01:33:32.424679 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerDied","Data":"fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d"} Dec 05 01:33:32 crc kubenswrapper[4990]: I1205 01:33:32.424694 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerDied","Data":"c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855"} Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.124082 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297072 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-scripts\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297409 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-combined-ca-bundle\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297434 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-sg-core-conf-yaml\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297460 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfzm8\" (UniqueName: \"kubernetes.io/projected/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-kube-api-access-zfzm8\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297526 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-run-httpd\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297614 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-log-httpd\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.297654 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-config-data\") pod \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\" (UID: \"f6ac6fb7-9b42-49a0-af39-7d3d2f507781\") " Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.298539 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.298847 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.304705 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-kube-api-access-zfzm8" (OuterVolumeSpecName: "kube-api-access-zfzm8") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "kube-api-access-zfzm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.317860 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-scripts" (OuterVolumeSpecName: "scripts") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.331232 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.378644 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.399570 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.399598 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.399608 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.399616 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfzm8\" (UniqueName: \"kubernetes.io/projected/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-kube-api-access-zfzm8\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.399625 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.399633 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.407139 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-config-data" (OuterVolumeSpecName: "config-data") pod "f6ac6fb7-9b42-49a0-af39-7d3d2f507781" (UID: "f6ac6fb7-9b42-49a0-af39-7d3d2f507781"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.444522 4990 generic.go:334] "Generic (PLEG): container finished" podID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerID="d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b" exitCode=0 Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.444619 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.444628 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerDied","Data":"d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b"} Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.444969 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ac6fb7-9b42-49a0-af39-7d3d2f507781","Type":"ContainerDied","Data":"4dce1b97b439b3059d0ab51e852eb76a1ba1e334604cc3166a9eda8da28a46d4"} Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.444989 4990 scope.go:117] "RemoveContainer" containerID="10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.471161 4990 scope.go:117] "RemoveContainer" containerID="fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.503412 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ac6fb7-9b42-49a0-af39-7d3d2f507781-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.507744 4990 scope.go:117] "RemoveContainer" containerID="d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.515969 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.528241 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.537530 4990 scope.go:117] "RemoveContainer" containerID="c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.537975 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.538403 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-central-agent" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538423 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-central-agent" Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.538448 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="sg-core" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538455 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="sg-core" Dec 05 01:33:34 crc 
kubenswrapper[4990]: E1205 01:33:34.538469 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="proxy-httpd" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538537 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="proxy-httpd" Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.538555 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-notification-agent" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538561 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-notification-agent" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538730 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-notification-agent" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538746 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="sg-core" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538758 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="ceilometer-central-agent" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.538771 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" containerName="proxy-httpd" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.540354 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.545252 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.545579 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.548053 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.566368 4990 scope.go:117] "RemoveContainer" containerID="10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d" Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.567237 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d\": container with ID starting with 10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d not found: ID does not exist" containerID="10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.567292 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d"} err="failed to get container status \"10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d\": rpc error: code = NotFound desc = could not find container \"10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d\": container with ID starting with 10bba521aeb9b675eb43e056667bd466b86323a66629d9abc6bec51bc51bcc6d not found: ID does not exist" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 
01:33:34.567331 4990 scope.go:117] "RemoveContainer" containerID="fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d" Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.567914 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d\": container with ID starting with fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d not found: ID does not exist" containerID="fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.567991 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d"} err="failed to get container status \"fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d\": rpc error: code = NotFound desc = could not find container \"fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d\": container with ID starting with fe871536800ec91d7b0c48f69f920a7afbce179b56f000d7fddc56d0208a228d not found: ID does not exist" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.568024 4990 scope.go:117] "RemoveContainer" containerID="d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b" Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.568383 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b\": container with ID starting with d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b not found: ID does not exist" containerID="d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.568414 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b"} err="failed to get container status \"d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b\": rpc error: code = NotFound desc = could not find container \"d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b\": container with ID starting with d126904099e4ca683ebce0f47028c10b2372893d2ee437219029228f94a2aa6b not found: ID does not exist" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.568440 4990 scope.go:117] "RemoveContainer" containerID="c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855" Dec 05 01:33:34 crc kubenswrapper[4990]: E1205 01:33:34.568773 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855\": container with ID starting with c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855 not found: ID does not exist" containerID="c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.568808 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855"} err="failed to get container status \"c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855\": rpc error: code = NotFound desc = could not find container \"c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855\": container with ID 
starting with c6dea0f524a22a2afdf7bb011bb248fddaf9b199da3864038fd0ae5a2ede6855 not found: ID does not exist" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706686 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-config-data\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706760 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706813 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-scripts\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706858 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-run-httpd\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706874 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706892 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5jh\" (UniqueName: \"kubernetes.io/projected/cac5c756-5ef1-4022-b7d1-0ffab174325e-kube-api-access-6v5jh\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.706917 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-log-httpd\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808240 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-scripts\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808350 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-run-httpd\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808376 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808404 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v5jh\" (UniqueName: \"kubernetes.io/projected/cac5c756-5ef1-4022-b7d1-0ffab174325e-kube-api-access-6v5jh\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808443 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-log-httpd\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808515 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-config-data\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.808567 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.809019 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-run-httpd\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.809110 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-log-httpd\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.817405 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-config-data\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.821414 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-scripts\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.826431 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.830830 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.834493 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v5jh\" (UniqueName: \"kubernetes.io/projected/cac5c756-5ef1-4022-b7d1-0ffab174325e-kube-api-access-6v5jh\") pod \"ceilometer-0\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " pod="openstack/ceilometer-0" Dec 05 01:33:34 crc kubenswrapper[4990]: I1205 01:33:34.863915 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:33:35 crc kubenswrapper[4990]: I1205 01:33:35.342275 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:33:35 crc kubenswrapper[4990]: I1205 01:33:35.349961 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 01:33:35 crc kubenswrapper[4990]: I1205 01:33:35.455549 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerStarted","Data":"d098dfc802727c0ef1fbda59b0cd5fd13cef62fe95715a8a21dfcedf4c9c78bc"} Dec 05 01:33:35 crc kubenswrapper[4990]: I1205 01:33:35.940745 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ac6fb7-9b42-49a0-af39-7d3d2f507781" path="/var/lib/kubelet/pods/f6ac6fb7-9b42-49a0-af39-7d3d2f507781/volumes" Dec 05 01:33:36 crc kubenswrapper[4990]: I1205 01:33:36.469087 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerStarted","Data":"d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0"} Dec 05 01:33:37 crc kubenswrapper[4990]: I1205 01:33:37.481382 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerStarted","Data":"77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92"} Dec 05 01:33:37 crc kubenswrapper[4990]: I1205 01:33:37.481782 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerStarted","Data":"dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073"} Dec 05 01:33:38 crc kubenswrapper[4990]: I1205 01:33:38.497087 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerStarted","Data":"84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b"} Dec 05 01:33:38 crc kubenswrapper[4990]: I1205 01:33:38.497627 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:33:38 crc kubenswrapper[4990]: I1205 01:33:38.521741 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.650212887 podStartE2EDuration="4.521720735s" podCreationTimestamp="2025-12-05 01:33:34 +0000 UTC" firstStartedPulling="2025-12-05 01:33:35.34967839 +0000 UTC m=+1513.725893751" lastFinishedPulling="2025-12-05 01:33:38.221186238 +0000 UTC m=+1516.597401599" observedRunningTime="2025-12-05 01:33:38.514610043 +0000 UTC m=+1516.890825434" watchObservedRunningTime="2025-12-05 01:33:38.521720735 +0000 UTC m=+1516.897936096" Dec 05 01:33:39 
crc kubenswrapper[4990]: I1205 01:33:39.505448 4990 generic.go:334] "Generic (PLEG): container finished" podID="74ff5181-d9ea-4726-97f0-3d62935f4949" containerID="84eda9e236cf04f0c78610e9e447f88ae6f7b12e26fe33aa20954f632cbbdd2c" exitCode=0 Dec 05 01:33:39 crc kubenswrapper[4990]: I1205 01:33:39.505636 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" event={"ID":"74ff5181-d9ea-4726-97f0-3d62935f4949","Type":"ContainerDied","Data":"84eda9e236cf04f0c78610e9e447f88ae6f7b12e26fe33aa20954f632cbbdd2c"} Dec 05 01:33:40 crc kubenswrapper[4990]: I1205 01:33:40.950285 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.127621 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-combined-ca-bundle\") pod \"74ff5181-d9ea-4726-97f0-3d62935f4949\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.127960 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwcnt\" (UniqueName: \"kubernetes.io/projected/74ff5181-d9ea-4726-97f0-3d62935f4949-kube-api-access-rwcnt\") pod \"74ff5181-d9ea-4726-97f0-3d62935f4949\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.128143 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-config-data\") pod \"74ff5181-d9ea-4726-97f0-3d62935f4949\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.128287 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-scripts\") pod \"74ff5181-d9ea-4726-97f0-3d62935f4949\" (UID: \"74ff5181-d9ea-4726-97f0-3d62935f4949\") " Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.133218 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-scripts" (OuterVolumeSpecName: "scripts") pod "74ff5181-d9ea-4726-97f0-3d62935f4949" (UID: "74ff5181-d9ea-4726-97f0-3d62935f4949"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.135265 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ff5181-d9ea-4726-97f0-3d62935f4949-kube-api-access-rwcnt" (OuterVolumeSpecName: "kube-api-access-rwcnt") pod "74ff5181-d9ea-4726-97f0-3d62935f4949" (UID: "74ff5181-d9ea-4726-97f0-3d62935f4949"). InnerVolumeSpecName "kube-api-access-rwcnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.165964 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-config-data" (OuterVolumeSpecName: "config-data") pod "74ff5181-d9ea-4726-97f0-3d62935f4949" (UID: "74ff5181-d9ea-4726-97f0-3d62935f4949"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.181759 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74ff5181-d9ea-4726-97f0-3d62935f4949" (UID: "74ff5181-d9ea-4726-97f0-3d62935f4949"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.230707 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.230748 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.230766 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff5181-d9ea-4726-97f0-3d62935f4949-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.230786 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwcnt\" (UniqueName: \"kubernetes.io/projected/74ff5181-d9ea-4726-97f0-3d62935f4949-kube-api-access-rwcnt\") on node \"crc\" DevicePath \"\"" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.529976 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" event={"ID":"74ff5181-d9ea-4726-97f0-3d62935f4949","Type":"ContainerDied","Data":"b817e06226e76ce9a34c88a436ce03adbf3be31fce4808191a48cc7a7c2c1c04"} Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.530011 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b817e06226e76ce9a34c88a436ce03adbf3be31fce4808191a48cc7a7c2c1c04" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.530068 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gmxzd" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.661015 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 01:33:41 crc kubenswrapper[4990]: E1205 01:33:41.661697 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ff5181-d9ea-4726-97f0-3d62935f4949" containerName="nova-cell0-conductor-db-sync" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.661801 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ff5181-d9ea-4726-97f0-3d62935f4949" containerName="nova-cell0-conductor-db-sync" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.662146 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ff5181-d9ea-4726-97f0-3d62935f4949" containerName="nova-cell0-conductor-db-sync" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.662900 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.666373 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.668816 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w5ffx" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.679124 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.739340 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.739452 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.739547 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xzj\" (UniqueName: \"kubernetes.io/projected/426a0569-3dcd-4f28-9556-d4be5f1bdc18-kube-api-access-b9xzj\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.841336 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xzj\" (UniqueName: \"kubernetes.io/projected/426a0569-3dcd-4f28-9556-d4be5f1bdc18-kube-api-access-b9xzj\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.841799 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.841841 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.848172 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.851226 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.866360 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xzj\" (UniqueName: \"kubernetes.io/projected/426a0569-3dcd-4f28-9556-d4be5f1bdc18-kube-api-access-b9xzj\") pod \"nova-cell0-conductor-0\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:41 crc kubenswrapper[4990]: I1205 01:33:41.987965 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:42 crc kubenswrapper[4990]: I1205 01:33:42.481301 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 01:33:42 crc kubenswrapper[4990]: I1205 01:33:42.540093 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"426a0569-3dcd-4f28-9556-d4be5f1bdc18","Type":"ContainerStarted","Data":"c205db4ebc5778e52009e312ac021ae0ac38f74fd09112d5bace0726e992ee93"} Dec 05 01:33:43 crc kubenswrapper[4990]: I1205 01:33:43.550110 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"426a0569-3dcd-4f28-9556-d4be5f1bdc18","Type":"ContainerStarted","Data":"4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8"} Dec 05 01:33:43 crc kubenswrapper[4990]: I1205 01:33:43.550560 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:43 crc kubenswrapper[4990]: I1205 01:33:43.569156 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.5691383119999998 podStartE2EDuration="2.569138312s" podCreationTimestamp="2025-12-05 01:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:43.564795539 +0000 UTC m=+1521.941010910" watchObservedRunningTime="2025-12-05 01:33:43.569138312 +0000 UTC m=+1521.945353683" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.139774 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6bt8p"] Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.143177 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.152275 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bt8p"] Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.295090 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-utilities\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.295219 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjvm\" (UniqueName: \"kubernetes.io/projected/3379d337-9df6-4536-a98e-39a08d690d9f-kube-api-access-ctjvm\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.295569 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-catalog-content\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.397001 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-catalog-content\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.397084 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-utilities\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.397128 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctjvm\" (UniqueName: \"kubernetes.io/projected/3379d337-9df6-4536-a98e-39a08d690d9f-kube-api-access-ctjvm\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.398112 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-utilities\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.398144 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-catalog-content\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.417670 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ctjvm\" (UniqueName: \"kubernetes.io/projected/3379d337-9df6-4536-a98e-39a08d690d9f-kube-api-access-ctjvm\") pod \"redhat-operators-6bt8p\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.486834 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:44 crc kubenswrapper[4990]: I1205 01:33:44.938974 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bt8p"] Dec 05 01:33:45 crc kubenswrapper[4990]: I1205 01:33:45.575087 4990 generic.go:334] "Generic (PLEG): container finished" podID="3379d337-9df6-4536-a98e-39a08d690d9f" containerID="c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75" exitCode=0 Dec 05 01:33:45 crc kubenswrapper[4990]: I1205 01:33:45.575179 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerDied","Data":"c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75"} Dec 05 01:33:45 crc kubenswrapper[4990]: I1205 01:33:45.575537 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerStarted","Data":"63a846c73fd947f19f7a2f7618707806935b2a5e2ab552aa37fb8ea945673a4f"} Dec 05 01:33:46 crc kubenswrapper[4990]: I1205 01:33:46.585322 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerStarted","Data":"f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7"} Dec 05 01:33:49 crc kubenswrapper[4990]: I1205 01:33:49.621055 4990 generic.go:334] "Generic (PLEG): container finished" podID="3379d337-9df6-4536-a98e-39a08d690d9f" containerID="f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7" exitCode=0 Dec 05 01:33:49 crc kubenswrapper[4990]: I1205 01:33:49.621340 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerDied","Data":"f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7"} Dec 05 01:33:50 crc kubenswrapper[4990]: I1205 01:33:50.637589 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerStarted","Data":"f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68"} Dec 05 01:33:50 crc kubenswrapper[4990]: I1205 01:33:50.674462 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6bt8p" podStartSLOduration=2.006322637 podStartE2EDuration="6.674440561s" podCreationTimestamp="2025-12-05 01:33:44 +0000 UTC" firstStartedPulling="2025-12-05 01:33:45.578745434 +0000 UTC m=+1523.954960795" lastFinishedPulling="2025-12-05 01:33:50.246863318 +0000 UTC m=+1528.623078719" observedRunningTime="2025-12-05 01:33:50.663296145 +0000 UTC m=+1529.039511516" watchObservedRunningTime="2025-12-05 01:33:50.674440561 +0000 UTC m=+1529.050655942" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.015181 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 05 01:33:52 
crc kubenswrapper[4990]: I1205 01:33:52.443453 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-tqbr8"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.444558 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.446854 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.446955 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.465602 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqbr8"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.554825 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9tz5\" (UniqueName: \"kubernetes.io/projected/11217722-8e69-4028-a7c3-036cfdefcb77-kube-api-access-c9tz5\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.555113 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-config-data\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.555254 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.555311 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-scripts\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.657165 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-scripts\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.657232 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9tz5\" (UniqueName: \"kubernetes.io/projected/11217722-8e69-4028-a7c3-036cfdefcb77-kube-api-access-c9tz5\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.657336 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-config-data\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: 
\"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.657432 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.664355 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-scripts\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.666699 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.668128 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-config-data\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.671350 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.679968 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.687448 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.706135 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9tz5\" (UniqueName: \"kubernetes.io/projected/11217722-8e69-4028-a7c3-036cfdefcb77-kube-api-access-c9tz5\") pod \"nova-cell0-cell-mapping-tqbr8\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") " pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.721622 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.723117 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.740167 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.741797 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.783850 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.811312 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.827689 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.827796 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb674288-65cb-4d21-ae86-24f620603203-logs\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.827873 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggk8x\" (UniqueName: \"kubernetes.io/projected/c6f2ac73-99f6-4285-8a33-e59ffa88c462-kube-api-access-ggk8x\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.827957 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.827991 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-config-data\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.828024 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-config-data\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.828083 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6cr\" (UniqueName: \"kubernetes.io/projected/bb674288-65cb-4d21-ae86-24f620603203-kube-api-access-nn6cr\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.828113 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f2ac73-99f6-4285-8a33-e59ffa88c462-logs\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.860195 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.871648 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.877279 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.915355 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dvjzl"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.916897 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.923700 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930660 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930703 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-config-data\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930722 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-config-data\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930749 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930767 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-config\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930787 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930803 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6cr\" (UniqueName: \"kubernetes.io/projected/bb674288-65cb-4d21-ae86-24f620603203-kube-api-access-nn6cr\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930823 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c6f2ac73-99f6-4285-8a33-e59ffa88c462-logs\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930849 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930875 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjdvt\" (UniqueName: \"kubernetes.io/projected/b27ca4f7-1e58-4994-bc4f-1a751f08e628-kube-api-access-mjdvt\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930898 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930917 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-config-data\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930944 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb674288-65cb-4d21-ae86-24f620603203-logs\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930959 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt52z\" (UniqueName: \"kubernetes.io/projected/a37519f4-08c5-407b-9374-0f636d4366fc-kube-api-access-wt52z\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.930992 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggk8x\" (UniqueName: \"kubernetes.io/projected/c6f2ac73-99f6-4285-8a33-e59ffa88c462-kube-api-access-ggk8x\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.931025 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.931042 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.931820 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dvjzl"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.932232 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f2ac73-99f6-4285-8a33-e59ffa88c462-logs\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.935981 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb674288-65cb-4d21-ae86-24f620603203-logs\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.939288 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.942399 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.947139 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-config-data\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.947683 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-config-data\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.954358 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6cr\" (UniqueName: \"kubernetes.io/projected/bb674288-65cb-4d21-ae86-24f620603203-kube-api-access-nn6cr\") pod \"nova-metadata-0\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " pod="openstack/nova-metadata-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.956420 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggk8x\" (UniqueName: \"kubernetes.io/projected/c6f2ac73-99f6-4285-8a33-e59ffa88c462-kube-api-access-ggk8x\") pod \"nova-api-0\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " pod="openstack/nova-api-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.956476 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.957634 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.963950 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 01:33:52 crc kubenswrapper[4990]: I1205 01:33:52.972099 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032142 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032567 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjdvt\" (UniqueName: \"kubernetes.io/projected/b27ca4f7-1e58-4994-bc4f-1a751f08e628-kube-api-access-mjdvt\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032605 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-config-data\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032640 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032658 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt52z\" (UniqueName: \"kubernetes.io/projected/a37519f4-08c5-407b-9374-0f636d4366fc-kube-api-access-wt52z\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032705 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjl2b\" (UniqueName: \"kubernetes.io/projected/78e42b57-0a6f-4dc0-81b2-729838f73c91-kube-api-access-jjl2b\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032729 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032748 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 
01:33:53.032783 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032818 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-config\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032837 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.032859 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.033000 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.034405 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-svc\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.036054 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.037983 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-config-data\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.038165 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-config\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.038200 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.039889 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.065669 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt52z\" (UniqueName: \"kubernetes.io/projected/a37519f4-08c5-407b-9374-0f636d4366fc-kube-api-access-wt52z\") pod \"nova-scheduler-0\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.068758 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjdvt\" (UniqueName: \"kubernetes.io/projected/b27ca4f7-1e58-4994-bc4f-1a751f08e628-kube-api-access-mjdvt\") pod \"dnsmasq-dns-757b4f8459-dvjzl\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.133669 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjl2b\" (UniqueName: \"kubernetes.io/projected/78e42b57-0a6f-4dc0-81b2-729838f73c91-kube-api-access-jjl2b\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.133754 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.133823 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.137705 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.138939 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.151902 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjl2b\" (UniqueName: \"kubernetes.io/projected/78e42b57-0a6f-4dc0-81b2-729838f73c91-kube-api-access-jjl2b\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.200993 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.217098 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.238445 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.297943 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.311950 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.386589 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqbr8"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.450816 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnw59"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.452503 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.464012 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.464150 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.472846 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnw59"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.644380 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-scripts\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.644614 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.644636 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-config-data\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.644698 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgt9r\" (UniqueName: \"kubernetes.io/projected/92882bc9-a33b-4129-82a3-9ea0900acece-kube-api-access-qgt9r\") pod 
\"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.696048 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqbr8" event={"ID":"11217722-8e69-4028-a7c3-036cfdefcb77","Type":"ContainerStarted","Data":"69a5aabd9213d2ac0aa5f6ec4c7060d9bffab325b22fa8faccce3de34ef2b5a3"} Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.696260 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqbr8" event={"ID":"11217722-8e69-4028-a7c3-036cfdefcb77","Type":"ContainerStarted","Data":"a807d424f2b9c8d5c22a3ffe0028788c1eaa1cd3df1c8f009697ecc1ffc74443"} Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.718110 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.745821 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-scripts\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.745867 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.745886 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-config-data\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.745931 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgt9r\" (UniqueName: \"kubernetes.io/projected/92882bc9-a33b-4129-82a3-9ea0900acece-kube-api-access-qgt9r\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.753020 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-config-data\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.753554 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.755141 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-scripts\") pod 
\"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.761395 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.766450 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-tqbr8" podStartSLOduration=1.766412423 podStartE2EDuration="1.766412423s" podCreationTimestamp="2025-12-05 01:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:53.723544317 +0000 UTC m=+1532.099759668" watchObservedRunningTime="2025-12-05 01:33:53.766412423 +0000 UTC m=+1532.142627784" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.768958 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgt9r\" (UniqueName: \"kubernetes.io/projected/92882bc9-a33b-4129-82a3-9ea0900acece-kube-api-access-qgt9r\") pod \"nova-cell1-conductor-db-sync-nnw59\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") " pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.788179 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnw59" Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.878237 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:33:53 crc kubenswrapper[4990]: I1205 01:33:53.956346 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dvjzl"] Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.046411 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.285424 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnw59"] Dec 05 01:33:54 crc kubenswrapper[4990]: W1205 01:33:54.308347 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92882bc9_a33b_4129_82a3_9ea0900acece.slice/crio-301efee439b1c9c9731717f8bcdba638a0bcb239cde8adddbe9f0c43f11e8818 WatchSource:0}: Error finding container 301efee439b1c9c9731717f8bcdba638a0bcb239cde8adddbe9f0c43f11e8818: Status 404 returned error can't find the container with id 301efee439b1c9c9731717f8bcdba638a0bcb239cde8adddbe9f0c43f11e8818 Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.488108 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.488402 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.709411 4990 generic.go:334] "Generic (PLEG): container finished" podID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerID="c95ffe76a5951330701a50f1c0be95fb26965f5c6b814919400348bb0cf716fe" exitCode=0 Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.709475 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" 
event={"ID":"b27ca4f7-1e58-4994-bc4f-1a751f08e628","Type":"ContainerDied","Data":"c95ffe76a5951330701a50f1c0be95fb26965f5c6b814919400348bb0cf716fe"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.709545 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" event={"ID":"b27ca4f7-1e58-4994-bc4f-1a751f08e628","Type":"ContainerStarted","Data":"c4818ee5fcdd926f1ff8d706cf06613d924ef7a30dabe645dbf9b4547e091785"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.713295 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78e42b57-0a6f-4dc0-81b2-729838f73c91","Type":"ContainerStarted","Data":"fe6d564e79d52efa5f3bbe2a381a77538c3005d3128d2359391f826d1803c180"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.717928 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6f2ac73-99f6-4285-8a33-e59ffa88c462","Type":"ContainerStarted","Data":"76dc7e5b8aa447d73e16ee997e6a191363b1b4302fb7aaeedd5e9fe9918292f7"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.757141 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb674288-65cb-4d21-ae86-24f620603203","Type":"ContainerStarted","Data":"4f4dd91215ca3692e21def5ea257fe53c96661b731978f498fdf8af9ff295eef"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.760207 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a37519f4-08c5-407b-9374-0f636d4366fc","Type":"ContainerStarted","Data":"f29443665a851cb5f878b4d10bc9d715e0dbadf3748ac957d0caf5a3944a196d"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.776317 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnw59" event={"ID":"92882bc9-a33b-4129-82a3-9ea0900acece","Type":"ContainerStarted","Data":"20a15ff7d9ef03afb2290c70568d59c0cf2c772ab1f191fa1f665fe5087dd5d7"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.776373 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnw59" event={"ID":"92882bc9-a33b-4129-82a3-9ea0900acece","Type":"ContainerStarted","Data":"301efee439b1c9c9731717f8bcdba638a0bcb239cde8adddbe9f0c43f11e8818"} Dec 05 01:33:54 crc kubenswrapper[4990]: I1205 01:33:54.807328 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nnw59" podStartSLOduration=1.807306608 podStartE2EDuration="1.807306608s" podCreationTimestamp="2025-12-05 01:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:54.791978904 +0000 UTC m=+1533.168194265" watchObservedRunningTime="2025-12-05 01:33:54.807306608 +0000 UTC m=+1533.183521969" Dec 05 01:33:55 crc kubenswrapper[4990]: I1205 01:33:55.550368 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6bt8p" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="registry-server" probeResult="failure" output=< Dec 05 01:33:55 crc kubenswrapper[4990]: timeout: failed to connect service ":50051" within 1s Dec 05 01:33:55 crc kubenswrapper[4990]: > Dec 05 01:33:55 crc kubenswrapper[4990]: I1205 01:33:55.790182 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" 
event={"ID":"b27ca4f7-1e58-4994-bc4f-1a751f08e628","Type":"ContainerStarted","Data":"355f508a6ba74b579c0a1d7879d2f885a7d91996f956db831c47eaff9fdff134"} Dec 05 01:33:55 crc kubenswrapper[4990]: I1205 01:33:55.818948 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" podStartSLOduration=3.818932733 podStartE2EDuration="3.818932733s" podCreationTimestamp="2025-12-05 01:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:33:55.818077689 +0000 UTC m=+1534.194293060" watchObservedRunningTime="2025-12-05 01:33:55.818932733 +0000 UTC m=+1534.195148094" Dec 05 01:33:56 crc kubenswrapper[4990]: I1205 01:33:56.532088 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:33:56 crc kubenswrapper[4990]: I1205 01:33:56.553540 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:33:56 crc kubenswrapper[4990]: I1205 01:33:56.802993 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.813759 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6f2ac73-99f6-4285-8a33-e59ffa88c462","Type":"ContainerStarted","Data":"d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48"} Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.814517 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6f2ac73-99f6-4285-8a33-e59ffa88c462","Type":"ContainerStarted","Data":"22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499"} Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.817368 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb674288-65cb-4d21-ae86-24f620603203","Type":"ContainerStarted","Data":"33e18624487a0649c17fcc4bee3e82daec971c85efad2a6add23dc9afee24ab3"} Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.817501 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb674288-65cb-4d21-ae86-24f620603203","Type":"ContainerStarted","Data":"c189c80e4d1e8c0cb40b43bd6f7cfdd9ca6fd3bf68e0d1a4e9bcc9967adf47ac"} Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.817463 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-metadata" containerID="cri-o://33e18624487a0649c17fcc4bee3e82daec971c85efad2a6add23dc9afee24ab3" gracePeriod=30 Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.817436 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-log" containerID="cri-o://c189c80e4d1e8c0cb40b43bd6f7cfdd9ca6fd3bf68e0d1a4e9bcc9967adf47ac" gracePeriod=30 Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.822826 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a37519f4-08c5-407b-9374-0f636d4366fc","Type":"ContainerStarted","Data":"78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853"} Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.829105 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78e42b57-0a6f-4dc0-81b2-729838f73c91","Type":"ContainerStarted","Data":"d2611f2ce302131d0e577518b7875dd5f7375dc9cdc0ac8ff32dc7752b171c3f"} Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.829348 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="78e42b57-0a6f-4dc0-81b2-729838f73c91" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d2611f2ce302131d0e577518b7875dd5f7375dc9cdc0ac8ff32dc7752b171c3f" gracePeriod=30 Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.838452 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.453108198 podStartE2EDuration="5.838429474s" podCreationTimestamp="2025-12-05 01:33:52 +0000 UTC" firstStartedPulling="2025-12-05 01:33:53.725438931 +0000 UTC m=+1532.101654292" lastFinishedPulling="2025-12-05 01:33:57.110760167 +0000 UTC m=+1535.486975568" observedRunningTime="2025-12-05 01:33:57.82946847 +0000 UTC m=+1536.205683841" watchObservedRunningTime="2025-12-05 01:33:57.838429474 +0000 UTC m=+1536.214644845" Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.867336 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.643606923 podStartE2EDuration="5.867315004s" podCreationTimestamp="2025-12-05 01:33:52 +0000 UTC" firstStartedPulling="2025-12-05 01:33:53.894410095 +0000 UTC m=+1532.270625456" lastFinishedPulling="2025-12-05 01:33:57.118118176 +0000 UTC m=+1535.494333537" observedRunningTime="2025-12-05 01:33:57.849742365 +0000 UTC m=+1536.225957746" watchObservedRunningTime="2025-12-05 01:33:57.867315004 +0000 UTC m=+1536.243530365" Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.877229 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5237822039999998 podStartE2EDuration="5.877210955s" podCreationTimestamp="2025-12-05 01:33:52 +0000 UTC" firstStartedPulling="2025-12-05 01:33:53.749760191 +0000 UTC m=+1532.125975552" lastFinishedPulling="2025-12-05 01:33:57.103188932 +0000 UTC m=+1535.479404303" observedRunningTime="2025-12-05 01:33:57.874491568 +0000 UTC m=+1536.250706929" watchObservedRunningTime="2025-12-05 01:33:57.877210955 +0000 UTC m=+1536.253426326" Dec 05 01:33:57 crc kubenswrapper[4990]: I1205 01:33:57.896686 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.838619467 podStartE2EDuration="5.896659437s" podCreationTimestamp="2025-12-05 01:33:52 +0000 UTC" firstStartedPulling="2025-12-05 01:33:54.054693953 +0000 UTC m=+1532.430909314" lastFinishedPulling="2025-12-05 01:33:57.112733883 +0000 UTC m=+1535.488949284" observedRunningTime="2025-12-05 01:33:57.888051752 +0000 UTC m=+1536.264267113" watchObservedRunningTime="2025-12-05 01:33:57.896659437 +0000 UTC m=+1536.272874798" Dec 05 01:33:58 crc kubenswrapper[4990]: I1205 01:33:58.201457 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 01:33:58 crc kubenswrapper[4990]: I1205 01:33:58.201582 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 01:33:58 crc kubenswrapper[4990]: I1205 01:33:58.239570 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 01:33:58 crc 
Dec 05 01:33:58 crc kubenswrapper[4990]: I1205 01:33:58.313041 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 05 01:33:58 crc kubenswrapper[4990]: I1205 01:33:58.844650 4990 generic.go:334] "Generic (PLEG): container finished" podID="bb674288-65cb-4d21-ae86-24f620603203" containerID="c189c80e4d1e8c0cb40b43bd6f7cfdd9ca6fd3bf68e0d1a4e9bcc9967adf47ac" exitCode=143
Dec 05 01:33:58 crc kubenswrapper[4990]: I1205 01:33:58.846028 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb674288-65cb-4d21-ae86-24f620603203","Type":"ContainerDied","Data":"c189c80e4d1e8c0cb40b43bd6f7cfdd9ca6fd3bf68e0d1a4e9bcc9967adf47ac"}
Dec 05 01:34:01 crc kubenswrapper[4990]: I1205 01:34:01.877443 4990 generic.go:334] "Generic (PLEG): container finished" podID="92882bc9-a33b-4129-82a3-9ea0900acece" containerID="20a15ff7d9ef03afb2290c70568d59c0cf2c772ab1f191fa1f665fe5087dd5d7" exitCode=0
Dec 05 01:34:01 crc kubenswrapper[4990]: I1205 01:34:01.877537 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnw59" event={"ID":"92882bc9-a33b-4129-82a3-9ea0900acece","Type":"ContainerDied","Data":"20a15ff7d9ef03afb2290c70568d59c0cf2c772ab1f191fa1f665fe5087dd5d7"}
Dec 05 01:34:01 crc kubenswrapper[4990]: I1205 01:34:01.880944 4990 generic.go:334] "Generic (PLEG): container finished" podID="11217722-8e69-4028-a7c3-036cfdefcb77" containerID="69a5aabd9213d2ac0aa5f6ec4c7060d9bffab325b22fa8faccce3de34ef2b5a3" exitCode=0
Dec 05 01:34:01 crc kubenswrapper[4990]: I1205 01:34:01.880979 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqbr8" event={"ID":"11217722-8e69-4028-a7c3-036cfdefcb77","Type":"ContainerDied","Data":"69a5aabd9213d2ac0aa5f6ec4c7060d9bffab325b22fa8faccce3de34ef2b5a3"}
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.217628 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.217888 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.240094 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.270202 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.299651 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.389666 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9mtfg"]
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.389858 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerName="dnsmasq-dns" containerID="cri-o://fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626" gracePeriod=10
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.403922 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqbr8"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.466338 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnw59"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.554717 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-combined-ca-bundle\") pod \"92882bc9-a33b-4129-82a3-9ea0900acece\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.554803 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-config-data\") pod \"92882bc9-a33b-4129-82a3-9ea0900acece\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.554897 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9tz5\" (UniqueName: \"kubernetes.io/projected/11217722-8e69-4028-a7c3-036cfdefcb77-kube-api-access-c9tz5\") pod \"11217722-8e69-4028-a7c3-036cfdefcb77\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.555050 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-scripts\") pod \"92882bc9-a33b-4129-82a3-9ea0900acece\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.555112 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-combined-ca-bundle\") pod \"11217722-8e69-4028-a7c3-036cfdefcb77\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.555904 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-config-data\") pod \"11217722-8e69-4028-a7c3-036cfdefcb77\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.555936 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-scripts\") pod \"11217722-8e69-4028-a7c3-036cfdefcb77\" (UID: \"11217722-8e69-4028-a7c3-036cfdefcb77\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.555972 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgt9r\" (UniqueName: \"kubernetes.io/projected/92882bc9-a33b-4129-82a3-9ea0900acece-kube-api-access-qgt9r\") pod \"92882bc9-a33b-4129-82a3-9ea0900acece\" (UID: \"92882bc9-a33b-4129-82a3-9ea0900acece\") "
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.563425 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-scripts" (OuterVolumeSpecName: "scripts") pod "11217722-8e69-4028-a7c3-036cfdefcb77" (UID: "11217722-8e69-4028-a7c3-036cfdefcb77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.563596 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-scripts" (OuterVolumeSpecName: "scripts") pod "92882bc9-a33b-4129-82a3-9ea0900acece" (UID: "92882bc9-a33b-4129-82a3-9ea0900acece"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.570357 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92882bc9-a33b-4129-82a3-9ea0900acece-kube-api-access-qgt9r" (OuterVolumeSpecName: "kube-api-access-qgt9r") pod "92882bc9-a33b-4129-82a3-9ea0900acece" (UID: "92882bc9-a33b-4129-82a3-9ea0900acece"). InnerVolumeSpecName "kube-api-access-qgt9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.573474 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11217722-8e69-4028-a7c3-036cfdefcb77-kube-api-access-c9tz5" (OuterVolumeSpecName: "kube-api-access-c9tz5") pod "11217722-8e69-4028-a7c3-036cfdefcb77" (UID: "11217722-8e69-4028-a7c3-036cfdefcb77"). InnerVolumeSpecName "kube-api-access-c9tz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.596671 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-config-data" (OuterVolumeSpecName: "config-data") pod "11217722-8e69-4028-a7c3-036cfdefcb77" (UID: "11217722-8e69-4028-a7c3-036cfdefcb77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.598452 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92882bc9-a33b-4129-82a3-9ea0900acece" (UID: "92882bc9-a33b-4129-82a3-9ea0900acece"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.598671 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-config-data" (OuterVolumeSpecName: "config-data") pod "92882bc9-a33b-4129-82a3-9ea0900acece" (UID: "92882bc9-a33b-4129-82a3-9ea0900acece"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.603628 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11217722-8e69-4028-a7c3-036cfdefcb77" (UID: "11217722-8e69-4028-a7c3-036cfdefcb77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658280 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9tz5\" (UniqueName: \"kubernetes.io/projected/11217722-8e69-4028-a7c3-036cfdefcb77-kube-api-access-c9tz5\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658571 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658612 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658648 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658659 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11217722-8e69-4028-a7c3-036cfdefcb77-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658670 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgt9r\" (UniqueName: \"kubernetes.io/projected/92882bc9-a33b-4129-82a3-9ea0900acece-kube-api-access-qgt9r\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658679 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.658689 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92882bc9-a33b-4129-82a3-9ea0900acece-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.838235 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.902524 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnw59" event={"ID":"92882bc9-a33b-4129-82a3-9ea0900acece","Type":"ContainerDied","Data":"301efee439b1c9c9731717f8bcdba638a0bcb239cde8adddbe9f0c43f11e8818"}
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.902543 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnw59"
Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.902564 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301efee439b1c9c9731717f8bcdba638a0bcb239cde8adddbe9f0c43f11e8818"
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqbr8" Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.904203 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqbr8" event={"ID":"11217722-8e69-4028-a7c3-036cfdefcb77","Type":"ContainerDied","Data":"a807d424f2b9c8d5c22a3ffe0028788c1eaa1cd3df1c8f009697ecc1ffc74443"} Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.904410 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a807d424f2b9c8d5c22a3ffe0028788c1eaa1cd3df1c8f009697ecc1ffc74443" Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.906002 4990 generic.go:334] "Generic (PLEG): container finished" podID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerID="fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626" exitCode=0 Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.907179 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.907614 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" event={"ID":"80cbc6ec-8ce0-4957-9968-f7ce049b66d4","Type":"ContainerDied","Data":"fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626"} Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.907641 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9mtfg" event={"ID":"80cbc6ec-8ce0-4957-9968-f7ce049b66d4","Type":"ContainerDied","Data":"382032ed03bf78ded0cb437fda88a1dec44fbcea3b7b176529f8ee14ff3e137f"} Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.907658 4990 scope.go:117] "RemoveContainer" containerID="fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626" Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.973525 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-sb\") pod \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.975149 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx22v\" (UniqueName: \"kubernetes.io/projected/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-kube-api-access-jx22v\") pod \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.975331 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-nb\") pod \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.975424 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-swift-storage-0\") pod \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.975470 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-config\") pod 
\"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.975598 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-svc\") pod \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\" (UID: \"80cbc6ec-8ce0-4957-9968-f7ce049b66d4\") " Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.978277 4990 scope.go:117] "RemoveContainer" containerID="295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873" Dec 05 01:34:03 crc kubenswrapper[4990]: I1205 01:34:03.985871 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.013411 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-kube-api-access-jx22v" (OuterVolumeSpecName: "kube-api-access-jx22v") pod "80cbc6ec-8ce0-4957-9968-f7ce049b66d4" (UID: "80cbc6ec-8ce0-4957-9968-f7ce049b66d4"). InnerVolumeSpecName "kube-api-access-jx22v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.025718 4990 scope.go:117] "RemoveContainer" containerID="fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.026204 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 01:34:04 crc kubenswrapper[4990]: E1205 01:34:04.026268 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626\": container with ID starting with fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626 not found: ID does not exist" containerID="fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.026327 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626"} err="failed to get container status \"fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626\": rpc error: code = NotFound desc = could not find container \"fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626\": container with ID starting with fa49af118ebd203d1ad30570dcde553d0db1eaf7b3527a9f42ebfc6264fac626 not found: ID does not exist" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.026375 4990 scope.go:117] "RemoveContainer" containerID="295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873" Dec 05 01:34:04 crc kubenswrapper[4990]: E1205 01:34:04.027459 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerName="dnsmasq-dns" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.027498 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerName="dnsmasq-dns" Dec 05 01:34:04 crc kubenswrapper[4990]: E1205 01:34:04.027536 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11217722-8e69-4028-a7c3-036cfdefcb77" containerName="nova-manage" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.027546 4990 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="11217722-8e69-4028-a7c3-036cfdefcb77" containerName="nova-manage" Dec 05 01:34:04 crc kubenswrapper[4990]: E1205 01:34:04.027584 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92882bc9-a33b-4129-82a3-9ea0900acece" containerName="nova-cell1-conductor-db-sync" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.027593 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="92882bc9-a33b-4129-82a3-9ea0900acece" containerName="nova-cell1-conductor-db-sync" Dec 05 01:34:04 crc kubenswrapper[4990]: E1205 01:34:04.027628 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerName="init" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.027637 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerName="init" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.028064 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" containerName="dnsmasq-dns" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.028084 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="92882bc9-a33b-4129-82a3-9ea0900acece" containerName="nova-cell1-conductor-db-sync" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.028110 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="11217722-8e69-4028-a7c3-036cfdefcb77" containerName="nova-manage" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.029162 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: E1205 01:34:04.031386 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873\": container with ID starting with 295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873 not found: ID does not exist" containerID="295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.031714 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.032062 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873"} err="failed to get container status \"295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873\": rpc error: code = NotFound desc = could not find container \"295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873\": container with ID starting with 295bbf28419275cdbad11b47870480a8db3beb4d12c9e28b0663610efd73d873 not found: ID does not exist" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.049441 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.058084 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80cbc6ec-8ce0-4957-9968-f7ce049b66d4" (UID: "80cbc6ec-8ce0-4957-9968-f7ce049b66d4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.066815 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-config" (OuterVolumeSpecName: "config") pod "80cbc6ec-8ce0-4957-9968-f7ce049b66d4" (UID: "80cbc6ec-8ce0-4957-9968-f7ce049b66d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.067876 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80cbc6ec-8ce0-4957-9968-f7ce049b66d4" (UID: "80cbc6ec-8ce0-4957-9968-f7ce049b66d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.075451 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "80cbc6ec-8ce0-4957-9968-f7ce049b66d4" (UID: "80cbc6ec-8ce0-4957-9968-f7ce049b66d4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.080656 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.080698 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx22v\" (UniqueName: \"kubernetes.io/projected/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-kube-api-access-jx22v\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.080751 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.080777 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.080786 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.097968 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80cbc6ec-8ce0-4957-9968-f7ce049b66d4" (UID: "80cbc6ec-8ce0-4957-9968-f7ce049b66d4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.116968 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.120430 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-api" containerID="cri-o://d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48" gracePeriod=30 Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.121553 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-log" containerID="cri-o://22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499" gracePeriod=30 Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.132265 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": EOF" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.132306 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": EOF" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.182474 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.182570 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.182656 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcs4p\" (UniqueName: \"kubernetes.io/projected/23fef2f1-b3e2-4d6f-8beb-efd01386d758-kube-api-access-xcs4p\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.182736 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cbc6ec-8ce0-4957-9968-f7ce049b66d4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.247044 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9mtfg"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.261747 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9mtfg"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.284584 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcs4p\" (UniqueName: \"kubernetes.io/projected/23fef2f1-b3e2-4d6f-8beb-efd01386d758-kube-api-access-xcs4p\") pod \"nova-cell1-conductor-0\" (UID: 
\"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.284651 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.284708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.289165 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.289242 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.300365 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcs4p\" (UniqueName: \"kubernetes.io/projected/23fef2f1-b3e2-4d6f-8beb-efd01386d758-kube-api-access-xcs4p\") pod \"nova-cell1-conductor-0\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.363372 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.418758 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.534703 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.597754 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.775666 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bt8p"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.838595 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.875279 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.920741 4990 generic.go:334] "Generic (PLEG): container finished" podID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerID="22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499" exitCode=143 Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.920811 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6f2ac73-99f6-4285-8a33-e59ffa88c462","Type":"ContainerDied","Data":"22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499"} Dec 05 01:34:04 crc kubenswrapper[4990]: I1205 01:34:04.922303 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"23fef2f1-b3e2-4d6f-8beb-efd01386d758","Type":"ContainerStarted","Data":"145e5a4dc36d1dfbe72961ae42699553ff038cc54c52c7feca0037e661d14b43"} Dec 05 01:34:05 crc kubenswrapper[4990]: I1205 01:34:05.939433 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a37519f4-08c5-407b-9374-0f636d4366fc" containerName="nova-scheduler-scheduler" containerID="cri-o://78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" gracePeriod=30 Dec 05 01:34:05 crc kubenswrapper[4990]: I1205 01:34:05.939586 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6bt8p" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="registry-server" containerID="cri-o://f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68" gracePeriod=2 Dec 05 01:34:05 crc kubenswrapper[4990]: I1205 01:34:05.945577 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cbc6ec-8ce0-4957-9968-f7ce049b66d4" path="/var/lib/kubelet/pods/80cbc6ec-8ce0-4957-9968-f7ce049b66d4/volumes" Dec 05 01:34:05 crc kubenswrapper[4990]: I1205 01:34:05.946620 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"23fef2f1-b3e2-4d6f-8beb-efd01386d758","Type":"ContainerStarted","Data":"99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8"} Dec 05 01:34:05 crc kubenswrapper[4990]: I1205 01:34:05.971210 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.971190326 podStartE2EDuration="2.971190326s" podCreationTimestamp="2025-12-05 01:34:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:05.962747307 +0000 UTC m=+1544.338962678" watchObservedRunningTime="2025-12-05 01:34:05.971190326 +0000 UTC m=+1544.347405697" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.443844 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.633309 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctjvm\" (UniqueName: \"kubernetes.io/projected/3379d337-9df6-4536-a98e-39a08d690d9f-kube-api-access-ctjvm\") pod \"3379d337-9df6-4536-a98e-39a08d690d9f\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.633685 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-utilities\") pod \"3379d337-9df6-4536-a98e-39a08d690d9f\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.633818 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-catalog-content\") pod \"3379d337-9df6-4536-a98e-39a08d690d9f\" (UID: \"3379d337-9df6-4536-a98e-39a08d690d9f\") " Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.634301 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-utilities" (OuterVolumeSpecName: "utilities") pod "3379d337-9df6-4536-a98e-39a08d690d9f" (UID: "3379d337-9df6-4536-a98e-39a08d690d9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.639906 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3379d337-9df6-4536-a98e-39a08d690d9f-kube-api-access-ctjvm" (OuterVolumeSpecName: "kube-api-access-ctjvm") pod "3379d337-9df6-4536-a98e-39a08d690d9f" (UID: "3379d337-9df6-4536-a98e-39a08d690d9f"). InnerVolumeSpecName "kube-api-access-ctjvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.735333 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.735361 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctjvm\" (UniqueName: \"kubernetes.io/projected/3379d337-9df6-4536-a98e-39a08d690d9f-kube-api-access-ctjvm\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.736161 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3379d337-9df6-4536-a98e-39a08d690d9f" (UID: "3379d337-9df6-4536-a98e-39a08d690d9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.836645 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3379d337-9df6-4536-a98e-39a08d690d9f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.948123 4990 generic.go:334] "Generic (PLEG): container finished" podID="3379d337-9df6-4536-a98e-39a08d690d9f" containerID="f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68" exitCode=0 Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.948175 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerDied","Data":"f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68"} Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.948240 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bt8p" event={"ID":"3379d337-9df6-4536-a98e-39a08d690d9f","Type":"ContainerDied","Data":"63a846c73fd947f19f7a2f7618707806935b2a5e2ab552aa37fb8ea945673a4f"} Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.948265 4990 scope.go:117] "RemoveContainer" containerID="f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.948199 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bt8p" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.949145 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:06 crc kubenswrapper[4990]: I1205 01:34:06.976843 4990 scope.go:117] "RemoveContainer" containerID="f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.012375 4990 scope.go:117] "RemoveContainer" containerID="c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.017848 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bt8p"] Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.027011 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6bt8p"] Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.059906 4990 scope.go:117] "RemoveContainer" containerID="f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68" Dec 05 01:34:07 crc kubenswrapper[4990]: E1205 01:34:07.060555 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68\": container with ID starting with f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68 not found: ID does not exist" containerID="f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.063001 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68"} err="failed to get container status \"f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68\": rpc error: code = NotFound desc = could not find container 
\"f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68\": container with ID starting with f89daa114b815dc0d9e25d0db39c10b967782ac4c8249ffc44b4931046abef68 not found: ID does not exist" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.063035 4990 scope.go:117] "RemoveContainer" containerID="f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7" Dec 05 01:34:07 crc kubenswrapper[4990]: E1205 01:34:07.064180 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7\": container with ID starting with f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7 not found: ID does not exist" containerID="f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.064218 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7"} err="failed to get container status \"f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7\": rpc error: code = NotFound desc = could not find container \"f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7\": container with ID starting with f8ab03bd37dfc945fec438eca4e12f55ae500de982392c9e451cb5689a78f5f7 not found: ID does not exist" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.064240 4990 scope.go:117] "RemoveContainer" containerID="c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75" Dec 05 01:34:07 crc kubenswrapper[4990]: E1205 01:34:07.064697 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75\": container with ID starting with c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75 not found: ID does not exist" containerID="c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.064728 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75"} err="failed to get container status \"c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75\": rpc error: code = NotFound desc = could not find container \"c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75\": container with ID starting with c8bcdd217446bc3c574e5aac59cf99bd5164431c013708b1eec38c51290f2f75 not found: ID does not exist" Dec 05 01:34:07 crc kubenswrapper[4990]: I1205 01:34:07.949615 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" path="/var/lib/kubelet/pods/3379d337-9df6-4536-a98e-39a08d690d9f/volumes" Dec 05 01:34:08 crc kubenswrapper[4990]: E1205 01:34:08.242284 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:34:08 crc kubenswrapper[4990]: E1205 01:34:08.246802 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:34:08 crc kubenswrapper[4990]: E1205 01:34:08.252253 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:34:08 crc kubenswrapper[4990]: E1205 01:34:08.252344 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a37519f4-08c5-407b-9374-0f636d4366fc" containerName="nova-scheduler-scheduler" Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.934580 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.972673 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.972883 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" containerName="kube-state-metrics" containerID="cri-o://45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0" gracePeriod=30 Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.985310 4990 generic.go:334] "Generic (PLEG): container finished" podID="a37519f4-08c5-407b-9374-0f636d4366fc" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" exitCode=0 Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.985567 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.985752 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a37519f4-08c5-407b-9374-0f636d4366fc","Type":"ContainerDied","Data":"78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853"} Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.985821 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a37519f4-08c5-407b-9374-0f636d4366fc","Type":"ContainerDied","Data":"f29443665a851cb5f878b4d10bc9d715e0dbadf3748ac957d0caf5a3944a196d"} Dec 05 01:34:08 crc kubenswrapper[4990]: I1205 01:34:08.985842 4990 scope.go:117] "RemoveContainer" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.009663 4990 scope.go:117] "RemoveContainer" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" Dec 05 01:34:09 crc kubenswrapper[4990]: E1205 01:34:09.010088 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853\": container with ID starting with 78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853 not found: ID does not exist" containerID="78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.010130 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853"} err="failed to get container status \"78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853\": rpc error: code = NotFound desc = could not find container \"78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853\": container with ID starting with 78aa07ca22c10414893d5a27efc4344f9899a19efe7f18d9046886144f32e853 not found: ID does not exist" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.081250 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-config-data\") pod \"a37519f4-08c5-407b-9374-0f636d4366fc\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.081364 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt52z\" (UniqueName: \"kubernetes.io/projected/a37519f4-08c5-407b-9374-0f636d4366fc-kube-api-access-wt52z\") pod \"a37519f4-08c5-407b-9374-0f636d4366fc\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.081405 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-combined-ca-bundle\") pod \"a37519f4-08c5-407b-9374-0f636d4366fc\" (UID: \"a37519f4-08c5-407b-9374-0f636d4366fc\") " Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.090043 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37519f4-08c5-407b-9374-0f636d4366fc-kube-api-access-wt52z" (OuterVolumeSpecName: "kube-api-access-wt52z") pod "a37519f4-08c5-407b-9374-0f636d4366fc" (UID: "a37519f4-08c5-407b-9374-0f636d4366fc"). InnerVolumeSpecName "kube-api-access-wt52z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.113335 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-config-data" (OuterVolumeSpecName: "config-data") pod "a37519f4-08c5-407b-9374-0f636d4366fc" (UID: "a37519f4-08c5-407b-9374-0f636d4366fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.119628 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a37519f4-08c5-407b-9374-0f636d4366fc" (UID: "a37519f4-08c5-407b-9374-0f636d4366fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.183358 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt52z\" (UniqueName: \"kubernetes.io/projected/a37519f4-08c5-407b-9374-0f636d4366fc-kube-api-access-wt52z\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.183680 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.183695 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37519f4-08c5-407b-9374-0f636d4366fc-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.335471 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.359794 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.373698 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:09 crc kubenswrapper[4990]: E1205 01:34:09.374141 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="extract-content" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.374157 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="extract-content" Dec 05 01:34:09 crc kubenswrapper[4990]: E1205 01:34:09.374172 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="extract-utilities" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.374179 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="extract-utilities" Dec 05 01:34:09 crc kubenswrapper[4990]: E1205 01:34:09.374205 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37519f4-08c5-407b-9374-0f636d4366fc" containerName="nova-scheduler-scheduler" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.374212 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37519f4-08c5-407b-9374-0f636d4366fc" containerName="nova-scheduler-scheduler" Dec 05 01:34:09 crc kubenswrapper[4990]: E1205 01:34:09.374221 4990 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="registry-server" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.374226 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="registry-server" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.374420 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3379d337-9df6-4536-a98e-39a08d690d9f" containerName="registry-server" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.374435 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37519f4-08c5-407b-9374-0f636d4366fc" containerName="nova-scheduler-scheduler" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.375032 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.389587 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.390121 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.460044 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.494015 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-config-data\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.494090 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.494131 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlzrc\" (UniqueName: \"kubernetes.io/projected/4da319ab-4e7d-4159-b2e8-6cdb92838859-kube-api-access-xlzrc\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.595340 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62497\" (UniqueName: \"kubernetes.io/projected/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7-kube-api-access-62497\") pod \"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7\" (UID: \"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7\") " Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.595741 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-config-data\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.595807 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.595847 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlzrc\" (UniqueName: \"kubernetes.io/projected/4da319ab-4e7d-4159-b2e8-6cdb92838859-kube-api-access-xlzrc\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.599838 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-config-data\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.599955 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7-kube-api-access-62497" (OuterVolumeSpecName: "kube-api-access-62497") pod "a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" (UID: "a53819fb-5ed6-4c06-8b50-9afd98a4ffb7"). InnerVolumeSpecName "kube-api-access-62497". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.600324 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.612064 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlzrc\" (UniqueName: \"kubernetes.io/projected/4da319ab-4e7d-4159-b2e8-6cdb92838859-kube-api-access-xlzrc\") pod \"nova-scheduler-0\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.697709 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62497\" (UniqueName: \"kubernetes.io/projected/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7-kube-api-access-62497\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.715428 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:09 crc kubenswrapper[4990]: I1205 01:34:09.945201 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37519f4-08c5-407b-9374-0f636d4366fc" path="/var/lib/kubelet/pods/a37519f4-08c5-407b-9374-0f636d4366fc/volumes" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.016070 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.029179 4990 generic.go:334] "Generic (PLEG): container finished" podID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerID="d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48" exitCode=0 Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.029257 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6f2ac73-99f6-4285-8a33-e59ffa88c462","Type":"ContainerDied","Data":"d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48"} Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.029286 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6f2ac73-99f6-4285-8a33-e59ffa88c462","Type":"ContainerDied","Data":"76dc7e5b8aa447d73e16ee997e6a191363b1b4302fb7aaeedd5e9fe9918292f7"} Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.029304 4990 scope.go:117] "RemoveContainer" containerID="d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.034101 4990 generic.go:334] "Generic (PLEG): container finished" podID="a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" containerID="45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0" exitCode=2 Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.034142 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7","Type":"ContainerDied","Data":"45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0"} Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.034180 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a53819fb-5ed6-4c06-8b50-9afd98a4ffb7","Type":"ContainerDied","Data":"cb9b495d9c8a43cb121cc560e4107ab77fc108c9c0086bd6c51a745f8475e299"} Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.034233 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.103598 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-config-data\") pod \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.103681 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f2ac73-99f6-4285-8a33-e59ffa88c462-logs\") pod \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.103708 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-combined-ca-bundle\") pod \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.103784 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggk8x\" (UniqueName: \"kubernetes.io/projected/c6f2ac73-99f6-4285-8a33-e59ffa88c462-kube-api-access-ggk8x\") pod \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\" (UID: \"c6f2ac73-99f6-4285-8a33-e59ffa88c462\") " Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.105758 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f2ac73-99f6-4285-8a33-e59ffa88c462-logs" (OuterVolumeSpecName: "logs") pod "c6f2ac73-99f6-4285-8a33-e59ffa88c462" (UID: "c6f2ac73-99f6-4285-8a33-e59ffa88c462"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.122981 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.123632 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f2ac73-99f6-4285-8a33-e59ffa88c462-kube-api-access-ggk8x" (OuterVolumeSpecName: "kube-api-access-ggk8x") pod "c6f2ac73-99f6-4285-8a33-e59ffa88c462" (UID: "c6f2ac73-99f6-4285-8a33-e59ffa88c462"). InnerVolumeSpecName "kube-api-access-ggk8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.129961 4990 scope.go:117] "RemoveContainer" containerID="22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.142006 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.166572 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:34:10 crc kubenswrapper[4990]: E1205 01:34:10.167625 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" containerName="kube-state-metrics" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.167718 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" containerName="kube-state-metrics" Dec 05 01:34:10 crc kubenswrapper[4990]: E1205 01:34:10.167803 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-api" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.167858 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-api" Dec 05 01:34:10 crc kubenswrapper[4990]: E1205 01:34:10.167910 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-log" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.167956 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-log" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.168191 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-api" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.168271 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" containerName="nova-api-log" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.168325 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" containerName="kube-state-metrics" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.169069 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.180977 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.181130 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.187166 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.194195 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-config-data" (OuterVolumeSpecName: "config-data") pod "c6f2ac73-99f6-4285-8a33-e59ffa88c462" (UID: "c6f2ac73-99f6-4285-8a33-e59ffa88c462"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.212219 4990 scope.go:117] "RemoveContainer" containerID="d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.213577 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggk8x\" (UniqueName: \"kubernetes.io/projected/c6f2ac73-99f6-4285-8a33-e59ffa88c462-kube-api-access-ggk8x\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.213600 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.213613 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f2ac73-99f6-4285-8a33-e59ffa88c462-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:10 crc kubenswrapper[4990]: E1205 01:34:10.213826 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48\": container with ID starting with d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48 not found: ID does not exist" containerID="d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.213854 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48"} err="failed to get container status \"d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48\": rpc error: code = NotFound desc = could not find container \"d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48\": container with ID starting with d17003a0c0ca3cf023603bb8c0869cb0ea0f1befa05085340696043f23a0ad48 not found: ID does not exist" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.213877 4990 scope.go:117] "RemoveContainer" containerID="22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499" Dec 05 01:34:10 crc kubenswrapper[4990]: E1205 01:34:10.216170 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499\": container with ID starting with 22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499 not found: ID does not exist" containerID="22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.216255 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499"} err="failed to get container status \"22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499\": rpc error: code = NotFound desc = could not find container \"22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499\": container with ID starting with 22b21180f73509b0eb010c4cbae22f33aa3cd1a6a9beda4eb679ce4eb3803499 not found: ID does not exist" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.216325 4990 scope.go:117] "RemoveContainer" containerID="45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 
01:34:10.224704 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6f2ac73-99f6-4285-8a33-e59ffa88c462" (UID: "c6f2ac73-99f6-4285-8a33-e59ffa88c462"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.239135 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.255530 4990 scope.go:117] "RemoveContainer" containerID="45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0" Dec 05 01:34:10 crc kubenswrapper[4990]: E1205 01:34:10.258176 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0\": container with ID starting with 45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0 not found: ID does not exist" containerID="45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.258215 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0"} err="failed to get container status \"45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0\": rpc error: code = NotFound desc = could not find container \"45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0\": container with ID starting with 45eea5d4b0311c3b9c64fb460ad3ed9b839b260bb1790b291ea222832d4538a0 not found: ID does not exist" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.315109 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.315588 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.315616 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5vct\" (UniqueName: \"kubernetes.io/projected/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-api-access-d5vct\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.315690 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.315772 4990 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f2ac73-99f6-4285-8a33-e59ffa88c462-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.418054 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.418122 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5vct\" (UniqueName: \"kubernetes.io/projected/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-api-access-d5vct\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.418204 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.418253 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.423833 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.423903 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.424298 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.439337 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5vct\" (UniqueName: \"kubernetes.io/projected/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-api-access-d5vct\") pod \"kube-state-metrics-0\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.525840 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:34:10 crc kubenswrapper[4990]: I1205 01:34:10.992020 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: W1205 01:34:11.001111 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod510e9e75_fc35_4bed_8e71_c6e27069f50a.slice/crio-7f5e007262af24ec50449dee9aaf340a149590a735c49b2f98fbe9e7d8c82214 WatchSource:0}: Error finding container 7f5e007262af24ec50449dee9aaf340a149590a735c49b2f98fbe9e7d8c82214: Status 404 returned error can't find the container with id 7f5e007262af24ec50449dee9aaf340a149590a735c49b2f98fbe9e7d8c82214 Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.046133 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4da319ab-4e7d-4159-b2e8-6cdb92838859","Type":"ContainerStarted","Data":"75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0"} Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.046533 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4da319ab-4e7d-4159-b2e8-6cdb92838859","Type":"ContainerStarted","Data":"062d889f538d0b4250ccb1dbc1e7cf0bdc758435be71dacec5f027abd0a6bdeb"} Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.048078 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"510e9e75-fc35-4bed-8e71-c6e27069f50a","Type":"ContainerStarted","Data":"7f5e007262af24ec50449dee9aaf340a149590a735c49b2f98fbe9e7d8c82214"} Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.049373 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.070791 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.070773013 podStartE2EDuration="2.070773013s" podCreationTimestamp="2025-12-05 01:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:11.062265722 +0000 UTC m=+1549.438481083" watchObservedRunningTime="2025-12-05 01:34:11.070773013 +0000 UTC m=+1549.446988384" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.088085 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.100944 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.114407 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.116369 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.119148 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.130005 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.143828 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.144153 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="sg-core" containerID="cri-o://77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92" gracePeriod=30 Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.144174 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-central-agent" containerID="cri-o://d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0" gracePeriod=30 Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.144153 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="proxy-httpd" containerID="cri-o://84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b" gracePeriod=30 Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.144291 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-notification-agent" containerID="cri-o://dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073" gracePeriod=30 Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.234081 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.234127 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-logs\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.234147 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7zt\" (UniqueName: \"kubernetes.io/projected/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-kube-api-access-2n7zt\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.234185 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-config-data\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.335992 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.336061 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-logs\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.336096 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n7zt\" (UniqueName: \"kubernetes.io/projected/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-kube-api-access-2n7zt\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.336148 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-config-data\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.337104 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-logs\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.342247 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.342738 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-config-data\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.356296 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n7zt\" (UniqueName: \"kubernetes.io/projected/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-kube-api-access-2n7zt\") pod \"nova-api-0\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.450621 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.883295 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:11 crc kubenswrapper[4990]: W1205 01:34:11.884051 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc75d6e07_58ff_4d23_bf8f_af0fbd02dd1c.slice/crio-f9d84b5bb0f88b906be8a6ac31489f666e1a9007e1d6a17c1848d8c11e1faa50 WatchSource:0}: Error finding container f9d84b5bb0f88b906be8a6ac31489f666e1a9007e1d6a17c1848d8c11e1faa50: Status 404 returned error can't find the container with id f9d84b5bb0f88b906be8a6ac31489f666e1a9007e1d6a17c1848d8c11e1faa50 Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.944964 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53819fb-5ed6-4c06-8b50-9afd98a4ffb7" path="/var/lib/kubelet/pods/a53819fb-5ed6-4c06-8b50-9afd98a4ffb7/volumes" Dec 05 01:34:11 crc kubenswrapper[4990]: I1205 01:34:11.945717 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f2ac73-99f6-4285-8a33-e59ffa88c462" path="/var/lib/kubelet/pods/c6f2ac73-99f6-4285-8a33-e59ffa88c462/volumes" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.065779 4990 generic.go:334] "Generic (PLEG): container finished" podID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerID="84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b" exitCode=0 Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.065805 4990 generic.go:334] "Generic (PLEG): container finished" podID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerID="77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92" exitCode=2 Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.065814 4990 generic.go:334] "Generic (PLEG): container finished" podID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerID="d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0" exitCode=0 Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.065860 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerDied","Data":"84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b"} Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.065884 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerDied","Data":"77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92"} Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.065894 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerDied","Data":"d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0"} Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.067922 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c","Type":"ContainerStarted","Data":"f9d84b5bb0f88b906be8a6ac31489f666e1a9007e1d6a17c1848d8c11e1faa50"} Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.069553 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"510e9e75-fc35-4bed-8e71-c6e27069f50a","Type":"ContainerStarted","Data":"96311983bd4bbe76a84ad5addf79e1a778ba9e353f3375b6a5513195e6d9b82e"} Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 
01:34:12.096400 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.739259141 podStartE2EDuration="2.096384224s" podCreationTimestamp="2025-12-05 01:34:10 +0000 UTC" firstStartedPulling="2025-12-05 01:34:11.003674469 +0000 UTC m=+1549.379889820" lastFinishedPulling="2025-12-05 01:34:11.360799542 +0000 UTC m=+1549.737014903" observedRunningTime="2025-12-05 01:34:12.088403508 +0000 UTC m=+1550.464618869" watchObservedRunningTime="2025-12-05 01:34:12.096384224 +0000 UTC m=+1550.472599585" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.425806 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560061 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v5jh\" (UniqueName: \"kubernetes.io/projected/cac5c756-5ef1-4022-b7d1-0ffab174325e-kube-api-access-6v5jh\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560165 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-config-data\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560212 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-combined-ca-bundle\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560271 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-run-httpd\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560373 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-log-httpd\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560399 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-scripts\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.560440 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-sg-core-conf-yaml\") pod \"cac5c756-5ef1-4022-b7d1-0ffab174325e\" (UID: \"cac5c756-5ef1-4022-b7d1-0ffab174325e\") " Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.561055 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.561295 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.564918 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-scripts" (OuterVolumeSpecName: "scripts") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.565205 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac5c756-5ef1-4022-b7d1-0ffab174325e-kube-api-access-6v5jh" (OuterVolumeSpecName: "kube-api-access-6v5jh") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). InnerVolumeSpecName "kube-api-access-6v5jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.596950 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.651848 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.662538 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.662573 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.662584 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cac5c756-5ef1-4022-b7d1-0ffab174325e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.662593 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.662602 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.662612 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v5jh\" (UniqueName: \"kubernetes.io/projected/cac5c756-5ef1-4022-b7d1-0ffab174325e-kube-api-access-6v5jh\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.684004 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-config-data" (OuterVolumeSpecName: "config-data") pod "cac5c756-5ef1-4022-b7d1-0ffab174325e" (UID: "cac5c756-5ef1-4022-b7d1-0ffab174325e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:12 crc kubenswrapper[4990]: I1205 01:34:12.764133 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac5c756-5ef1-4022-b7d1-0ffab174325e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.084011 4990 generic.go:334] "Generic (PLEG): container finished" podID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerID="dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073" exitCode=0 Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.084077 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerDied","Data":"dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073"} Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.084379 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cac5c756-5ef1-4022-b7d1-0ffab174325e","Type":"ContainerDied","Data":"d098dfc802727c0ef1fbda59b0cd5fd13cef62fe95715a8a21dfcedf4c9c78bc"} Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.084107 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.084403 4990 scope.go:117] "RemoveContainer" containerID="84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.090348 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c","Type":"ContainerStarted","Data":"d3940336277d60ace5f36b2ee414ce777934d853e511b44a3150f66a80829fcb"} Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.090392 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c","Type":"ContainerStarted","Data":"c171b2e78dfbc422122941ee824aec74124b9142b8d725d65407c7b6ccc17792"} Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.090448 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.111591 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.11156803 podStartE2EDuration="2.11156803s" podCreationTimestamp="2025-12-05 01:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:13.111328713 +0000 UTC m=+1551.487544074" watchObservedRunningTime="2025-12-05 01:34:13.11156803 +0000 UTC m=+1551.487783401" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.119600 4990 scope.go:117] "RemoveContainer" containerID="77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.141622 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.155679 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.158064 4990 scope.go:117] "RemoveContainer" containerID="dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.181406 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.182127 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="sg-core" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.182218 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="sg-core" Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.182320 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="proxy-httpd" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.182392 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="proxy-httpd" Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.182513 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-central-agent" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.182592 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-central-agent" Dec 05 01:34:13 crc 
kubenswrapper[4990]: E1205 01:34:13.182678 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-notification-agent" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.182749 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-notification-agent" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.183090 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="sg-core" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.183178 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-notification-agent" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.183275 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="ceilometer-central-agent" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.183352 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" containerName="proxy-httpd" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.185746 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.190715 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.190790 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.202651 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.217470 4990 scope.go:117] "RemoveContainer" containerID="d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.242822 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.251500 4990 scope.go:117] "RemoveContainer" containerID="84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b" Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.254965 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b\": container with ID starting with 84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b not found: ID does not exist" containerID="84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.255010 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b"} err="failed to get container status \"84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b\": rpc error: code = NotFound desc = could not find container \"84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b\": container with ID starting with 84934310e34288c78c6c523feef294486047f681fcfe2f5cc82e51e46dcb397b not found: ID does not exist" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.255036 4990 scope.go:117] 
"RemoveContainer" containerID="77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92" Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.255454 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92\": container with ID starting with 77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92 not found: ID does not exist" containerID="77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.255511 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92"} err="failed to get container status \"77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92\": rpc error: code = NotFound desc = could not find container \"77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92\": container with ID starting with 77cde06b98d6a489aa6358ccef160cde6ed889dfe54bde7720640d2619a87b92 not found: ID does not exist" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.255548 4990 scope.go:117] "RemoveContainer" containerID="dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073" Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.256030 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073\": container with ID starting with dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073 not found: ID does not exist" containerID="dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.256125 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073"} err="failed to get container status \"dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073\": rpc error: code = NotFound desc = could not find container \"dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073\": container with ID starting with dadd727ac26ef04c99b55cbedf347580a393396ead81d6e085059580daec6073 not found: ID does not exist" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.256206 4990 scope.go:117] "RemoveContainer" containerID="d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0" Dec 05 01:34:13 crc kubenswrapper[4990]: E1205 01:34:13.256872 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0\": container with ID starting with d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0 not found: ID does not exist" containerID="d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.256896 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0"} err="failed to get container status \"d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0\": rpc error: code = NotFound desc = could not find container \"d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0\": container with ID starting with 
d01af1a3c6cf88e1e5ce1192eba05bc64d29ef2e0ecb9035634c11011cb3c8c0 not found: ID does not exist" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.376890 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-config-data\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.376928 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-log-httpd\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.376996 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-run-httpd\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.377147 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54bq\" (UniqueName: \"kubernetes.io/projected/a69cb0f1-d7a0-41f5-86e0-29182919b047-kube-api-access-p54bq\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.377315 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.377362 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-scripts\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.377425 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.377495 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.479950 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480001 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-scripts\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480036 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480084 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480107 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-config-data\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480129 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-log-httpd\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480174 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-run-httpd\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480199 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p54bq\" (UniqueName: \"kubernetes.io/projected/a69cb0f1-d7a0-41f5-86e0-29182919b047-kube-api-access-p54bq\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.480891 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-log-httpd\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.481128 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-run-httpd\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.485932 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.486639 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-scripts\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.487345 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-config-data\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.488321 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.489222 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.498702 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p54bq\" (UniqueName: \"kubernetes.io/projected/a69cb0f1-d7a0-41f5-86e0-29182919b047-kube-api-access-p54bq\") pod \"ceilometer-0\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.511575 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.941060 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac5c756-5ef1-4022-b7d1-0ffab174325e" path="/var/lib/kubelet/pods/cac5c756-5ef1-4022-b7d1-0ffab174325e/volumes" Dec 05 01:34:13 crc kubenswrapper[4990]: I1205 01:34:13.967971 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:13 crc kubenswrapper[4990]: W1205 01:34:13.968576 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda69cb0f1_d7a0_41f5_86e0_29182919b047.slice/crio-dec689f15ff18b92a8e76cd6c565e78225153d6ab444cf409c48ac7c92a53883 WatchSource:0}: Error finding container dec689f15ff18b92a8e76cd6c565e78225153d6ab444cf409c48ac7c92a53883: Status 404 returned error can't find the container with id dec689f15ff18b92a8e76cd6c565e78225153d6ab444cf409c48ac7c92a53883 Dec 05 01:34:14 crc kubenswrapper[4990]: I1205 01:34:14.100827 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerStarted","Data":"dec689f15ff18b92a8e76cd6c565e78225153d6ab444cf409c48ac7c92a53883"} Dec 05 01:34:14 crc kubenswrapper[4990]: I1205 01:34:14.395901 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 01:34:14 crc kubenswrapper[4990]: I1205 01:34:14.716490 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 01:34:15 crc kubenswrapper[4990]: I1205 01:34:15.119728 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerStarted","Data":"10eec2287582c865a5528fe4bd1fc1cdf006a2a1d9802edb4aaa7b2b6ecbb352"} Dec 05 01:34:16 crc kubenswrapper[4990]: I1205 01:34:16.137026 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerStarted","Data":"0ad15248b91bfd8807d6e4fe9afdeb1651e986c9d31288af1d458c42035b77b5"} Dec 05 01:34:16 crc kubenswrapper[4990]: I1205 01:34:16.137322 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerStarted","Data":"5bf6752ff64cf6a92921e307e7c2fbc62cec51da91fc7002fb308151859faa2a"} Dec 05 01:34:18 crc kubenswrapper[4990]: I1205 01:34:18.181332 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerStarted","Data":"0e1a8495f4dbb1fbfda1371b96ecf67f978a97ca511f8f902b551fb5cf53c709"} Dec 05 01:34:18 crc kubenswrapper[4990]: I1205 01:34:18.181724 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:34:18 crc kubenswrapper[4990]: I1205 01:34:18.217867 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.23922036 podStartE2EDuration="5.217834496s" podCreationTimestamp="2025-12-05 01:34:13 +0000 UTC" firstStartedPulling="2025-12-05 01:34:13.971408467 +0000 UTC m=+1552.347623838" lastFinishedPulling="2025-12-05 01:34:16.950022613 +0000 UTC m=+1555.326237974" observedRunningTime="2025-12-05 01:34:18.21091181 +0000 UTC m=+1556.587127171" watchObservedRunningTime="2025-12-05 01:34:18.217834496 +0000 UTC m=+1556.594049857" Dec 05 01:34:19 crc kubenswrapper[4990]: I1205 01:34:19.715916 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 01:34:19 crc kubenswrapper[4990]: I1205 01:34:19.752733 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 01:34:20 crc kubenswrapper[4990]: I1205 01:34:20.249088 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 01:34:20 crc kubenswrapper[4990]: I1205 01:34:20.536705 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 01:34:21 crc kubenswrapper[4990]: I1205 01:34:21.451502 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 01:34:21 crc kubenswrapper[4990]: I1205 01:34:21.452282 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 01:34:22 crc kubenswrapper[4990]: I1205 01:34:22.534667 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 01:34:22 crc kubenswrapper[4990]: I1205 01:34:22.534695 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.291197 4990 generic.go:334] "Generic (PLEG): container finished" podID="78e42b57-0a6f-4dc0-81b2-729838f73c91" containerID="d2611f2ce302131d0e577518b7875dd5f7375dc9cdc0ac8ff32dc7752b171c3f" exitCode=137 Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.291301 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78e42b57-0a6f-4dc0-81b2-729838f73c91","Type":"ContainerDied","Data":"d2611f2ce302131d0e577518b7875dd5f7375dc9cdc0ac8ff32dc7752b171c3f"} Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.295836 4990 generic.go:334] "Generic (PLEG): container finished" podID="bb674288-65cb-4d21-ae86-24f620603203" containerID="33e18624487a0649c17fcc4bee3e82daec971c85efad2a6add23dc9afee24ab3" exitCode=137 Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.295895 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb674288-65cb-4d21-ae86-24f620603203","Type":"ContainerDied","Data":"33e18624487a0649c17fcc4bee3e82daec971c85efad2a6add23dc9afee24ab3"} Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.295937 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb674288-65cb-4d21-ae86-24f620603203","Type":"ContainerDied","Data":"4f4dd91215ca3692e21def5ea257fe53c96661b731978f498fdf8af9ff295eef"} Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.295957 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4dd91215ca3692e21def5ea257fe53c96661b731978f498fdf8af9ff295eef" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.371129 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.376639 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422177 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-combined-ca-bundle\") pod \"bb674288-65cb-4d21-ae86-24f620603203\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422299 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-config-data\") pod \"bb674288-65cb-4d21-ae86-24f620603203\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422350 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-config-data\") pod \"78e42b57-0a6f-4dc0-81b2-729838f73c91\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422393 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-combined-ca-bundle\") pod \"78e42b57-0a6f-4dc0-81b2-729838f73c91\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422441 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb674288-65cb-4d21-ae86-24f620603203-logs\") pod \"bb674288-65cb-4d21-ae86-24f620603203\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422508 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjl2b\" (UniqueName: \"kubernetes.io/projected/78e42b57-0a6f-4dc0-81b2-729838f73c91-kube-api-access-jjl2b\") pod \"78e42b57-0a6f-4dc0-81b2-729838f73c91\" (UID: \"78e42b57-0a6f-4dc0-81b2-729838f73c91\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.422577 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6cr\" (UniqueName: \"kubernetes.io/projected/bb674288-65cb-4d21-ae86-24f620603203-kube-api-access-nn6cr\") pod \"bb674288-65cb-4d21-ae86-24f620603203\" (UID: \"bb674288-65cb-4d21-ae86-24f620603203\") " Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.423319 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb674288-65cb-4d21-ae86-24f620603203-logs" (OuterVolumeSpecName: "logs") pod "bb674288-65cb-4d21-ae86-24f620603203" (UID: "bb674288-65cb-4d21-ae86-24f620603203"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.427861 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb674288-65cb-4d21-ae86-24f620603203-kube-api-access-nn6cr" (OuterVolumeSpecName: "kube-api-access-nn6cr") pod "bb674288-65cb-4d21-ae86-24f620603203" (UID: "bb674288-65cb-4d21-ae86-24f620603203"). InnerVolumeSpecName "kube-api-access-nn6cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.428070 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e42b57-0a6f-4dc0-81b2-729838f73c91-kube-api-access-jjl2b" (OuterVolumeSpecName: "kube-api-access-jjl2b") pod "78e42b57-0a6f-4dc0-81b2-729838f73c91" (UID: "78e42b57-0a6f-4dc0-81b2-729838f73c91"). InnerVolumeSpecName "kube-api-access-jjl2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.452456 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-config-data" (OuterVolumeSpecName: "config-data") pod "bb674288-65cb-4d21-ae86-24f620603203" (UID: "bb674288-65cb-4d21-ae86-24f620603203"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.452746 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-config-data" (OuterVolumeSpecName: "config-data") pod "78e42b57-0a6f-4dc0-81b2-729838f73c91" (UID: "78e42b57-0a6f-4dc0-81b2-729838f73c91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.458832 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb674288-65cb-4d21-ae86-24f620603203" (UID: "bb674288-65cb-4d21-ae86-24f620603203"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.465564 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e42b57-0a6f-4dc0-81b2-729838f73c91" (UID: "78e42b57-0a6f-4dc0-81b2-729838f73c91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523501 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523533 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523542 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e42b57-0a6f-4dc0-81b2-729838f73c91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523552 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb674288-65cb-4d21-ae86-24f620603203-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523561 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjl2b\" (UniqueName: \"kubernetes.io/projected/78e42b57-0a6f-4dc0-81b2-729838f73c91-kube-api-access-jjl2b\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523570 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6cr\" (UniqueName: \"kubernetes.io/projected/bb674288-65cb-4d21-ae86-24f620603203-kube-api-access-nn6cr\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:28 crc kubenswrapper[4990]: I1205 01:34:28.523580 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb674288-65cb-4d21-ae86-24f620603203-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.306838 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.306846 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78e42b57-0a6f-4dc0-81b2-729838f73c91","Type":"ContainerDied","Data":"fe6d564e79d52efa5f3bbe2a381a77538c3005d3128d2359391f826d1803c180"} Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.306928 4990 scope.go:117] "RemoveContainer" containerID="d2611f2ce302131d0e577518b7875dd5f7375dc9cdc0ac8ff32dc7752b171c3f" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.306840 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.346196 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.363624 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.378566 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.404578 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.427991 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: E1205 01:34:29.428512 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-log" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.428557 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-log" Dec 05 01:34:29 crc kubenswrapper[4990]: E1205 01:34:29.428575 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e42b57-0a6f-4dc0-81b2-729838f73c91" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.428584 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e42b57-0a6f-4dc0-81b2-729838f73c91" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 01:34:29 crc kubenswrapper[4990]: E1205 01:34:29.428602 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-metadata" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.428610 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-metadata" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.428860 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-log" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.428895 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e42b57-0a6f-4dc0-81b2-729838f73c91" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.428917 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb674288-65cb-4d21-ae86-24f620603203" containerName="nova-metadata-metadata" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.430224 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.432166 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.433405 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.441847 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.450642 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.452073 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.454008 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.454619 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.455444 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.464169 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.542377 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.542425 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6df8299-697f-4c4e-a0f6-821ff4261eb2-logs\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.542842 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.542953 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-config-data\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.542987 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx4hx\" (UniqueName: \"kubernetes.io/projected/f6df8299-697f-4c4e-a0f6-821ff4261eb2-kube-api-access-wx4hx\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 
01:34:29.644596 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.644641 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.644672 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6df8299-697f-4c4e-a0f6-821ff4261eb2-logs\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.644726 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645132 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645334 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645365 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdnw\" (UniqueName: \"kubernetes.io/projected/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-kube-api-access-5vdnw\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645472 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6df8299-697f-4c4e-a0f6-821ff4261eb2-logs\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645755 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-config-data\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645858 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx4hx\" (UniqueName: 
\"kubernetes.io/projected/f6df8299-697f-4c4e-a0f6-821ff4261eb2-kube-api-access-wx4hx\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.645994 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.650497 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.650559 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.651033 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-config-data\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.661293 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx4hx\" (UniqueName: \"kubernetes.io/projected/f6df8299-697f-4c4e-a0f6-821ff4261eb2-kube-api-access-wx4hx\") pod \"nova-metadata-0\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.747313 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.747368 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.747420 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.747441 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.747468 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdnw\" (UniqueName: \"kubernetes.io/projected/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-kube-api-access-5vdnw\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.750754 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.763263 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.763822 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.763827 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.764103 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.769645 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdnw\" (UniqueName: \"kubernetes.io/projected/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-kube-api-access-5vdnw\") pod \"nova-cell1-novncproxy-0\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.770376 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.944615 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e42b57-0a6f-4dc0-81b2-729838f73c91" path="/var/lib/kubelet/pods/78e42b57-0a6f-4dc0-81b2-729838f73c91/volumes" Dec 05 01:34:29 crc kubenswrapper[4990]: I1205 01:34:29.945842 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb674288-65cb-4d21-ae86-24f620603203" path="/var/lib/kubelet/pods/bb674288-65cb-4d21-ae86-24f620603203/volumes" Dec 05 01:34:30 crc kubenswrapper[4990]: I1205 01:34:30.210729 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:34:30 crc kubenswrapper[4990]: W1205 01:34:30.215634 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4182c8b1_5c4d_4f6b_aeca_9492abf6069e.slice/crio-dbdf5d647f21e3b7db61caa3988c4057d728ad26a3e44ab1627d67a619ebeb7f WatchSource:0}: Error finding container dbdf5d647f21e3b7db61caa3988c4057d728ad26a3e44ab1627d67a619ebeb7f: Status 404 returned error can't find the container with id dbdf5d647f21e3b7db61caa3988c4057d728ad26a3e44ab1627d67a619ebeb7f Dec 05 01:34:30 crc kubenswrapper[4990]: W1205 01:34:30.218884 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6df8299_697f_4c4e_a0f6_821ff4261eb2.slice/crio-9a5c4130e6707b19c75c1f053256afacb144b2adb6b63c1b01a4fc4292aa65a0 WatchSource:0}: Error finding container 9a5c4130e6707b19c75c1f053256afacb144b2adb6b63c1b01a4fc4292aa65a0: Status 404 returned error can't find the container with id 9a5c4130e6707b19c75c1f053256afacb144b2adb6b63c1b01a4fc4292aa65a0 Dec 05 01:34:30 crc kubenswrapper[4990]: I1205 01:34:30.219089 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:30 crc kubenswrapper[4990]: I1205 01:34:30.321741 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4182c8b1-5c4d-4f6b-aeca-9492abf6069e","Type":"ContainerStarted","Data":"dbdf5d647f21e3b7db61caa3988c4057d728ad26a3e44ab1627d67a619ebeb7f"} Dec 05 01:34:30 crc kubenswrapper[4990]: I1205 01:34:30.323758 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6df8299-697f-4c4e-a0f6-821ff4261eb2","Type":"ContainerStarted","Data":"9a5c4130e6707b19c75c1f053256afacb144b2adb6b63c1b01a4fc4292aa65a0"} Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.352554 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6df8299-697f-4c4e-a0f6-821ff4261eb2","Type":"ContainerStarted","Data":"a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20"} Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.353896 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6df8299-697f-4c4e-a0f6-821ff4261eb2","Type":"ContainerStarted","Data":"95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d"} Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.355844 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4182c8b1-5c4d-4f6b-aeca-9492abf6069e","Type":"ContainerStarted","Data":"36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96"} Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.392157 4990 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.392129219 podStartE2EDuration="2.392129219s" podCreationTimestamp="2025-12-05 01:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:31.368779286 +0000 UTC m=+1569.744994647" watchObservedRunningTime="2025-12-05 01:34:31.392129219 +0000 UTC m=+1569.768344580" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.410520 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.410458439 podStartE2EDuration="2.410458439s" podCreationTimestamp="2025-12-05 01:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:31.395725301 +0000 UTC m=+1569.771940682" watchObservedRunningTime="2025-12-05 01:34:31.410458439 +0000 UTC m=+1569.786673810" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.455251 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.455444 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.455840 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.455869 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.458896 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.459388 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.637305 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-w4dbr"] Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.638832 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.671904 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-w4dbr"] Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.788686 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.788768 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.788794 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.789805 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ngd\" (UniqueName: \"kubernetes.io/projected/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-kube-api-access-b6ngd\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.789876 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-config\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.789955 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.892033 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ngd\" (UniqueName: \"kubernetes.io/projected/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-kube-api-access-b6ngd\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.892074 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-config\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.892112 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.892229 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.892256 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.892271 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.893082 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-config\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.893293 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.893301 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.893844 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.894014 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.927021 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ngd\" (UniqueName: 
\"kubernetes.io/projected/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-kube-api-access-b6ngd\") pod \"dnsmasq-dns-89c5cd4d5-w4dbr\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:31 crc kubenswrapper[4990]: I1205 01:34:31.968116 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:32 crc kubenswrapper[4990]: I1205 01:34:32.413099 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-w4dbr"] Dec 05 01:34:32 crc kubenswrapper[4990]: W1205 01:34:32.416246 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e90bf4_ce4f_4b75_8a22_1ebe4c0df94e.slice/crio-31139e0b5d93e2ee80651bd099f04003f013897ab678065d078d7fa4dce9096c WatchSource:0}: Error finding container 31139e0b5d93e2ee80651bd099f04003f013897ab678065d078d7fa4dce9096c: Status 404 returned error can't find the container with id 31139e0b5d93e2ee80651bd099f04003f013897ab678065d078d7fa4dce9096c Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.301383 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.302445 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-central-agent" containerID="cri-o://10eec2287582c865a5528fe4bd1fc1cdf006a2a1d9802edb4aaa7b2b6ecbb352" gracePeriod=30 Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.302548 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="proxy-httpd" containerID="cri-o://0e1a8495f4dbb1fbfda1371b96ecf67f978a97ca511f8f902b551fb5cf53c709" gracePeriod=30 Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.302593 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-notification-agent" containerID="cri-o://5bf6752ff64cf6a92921e307e7c2fbc62cec51da91fc7002fb308151859faa2a" gracePeriod=30 Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.302554 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="sg-core" containerID="cri-o://0ad15248b91bfd8807d6e4fe9afdeb1651e986c9d31288af1d458c42035b77b5" gracePeriod=30 Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.314765 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.190:3000/\": read tcp 10.217.0.2:59064->10.217.0.190:3000: read: connection reset by peer" Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.372244 4990 generic.go:334] "Generic (PLEG): container finished" podID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerID="bfcaa5ec5dc3e543e38527fe82f91c0c255a6b6ef7707eb92e012492bd0a3e24" exitCode=0 Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.373468 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" 
event={"ID":"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e","Type":"ContainerDied","Data":"bfcaa5ec5dc3e543e38527fe82f91c0c255a6b6ef7707eb92e012492bd0a3e24"} Dec 05 01:34:33 crc kubenswrapper[4990]: I1205 01:34:33.373499 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" event={"ID":"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e","Type":"ContainerStarted","Data":"31139e0b5d93e2ee80651bd099f04003f013897ab678065d078d7fa4dce9096c"} Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.011363 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.384367 4990 generic.go:334] "Generic (PLEG): container finished" podID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerID="0e1a8495f4dbb1fbfda1371b96ecf67f978a97ca511f8f902b551fb5cf53c709" exitCode=0 Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.384657 4990 generic.go:334] "Generic (PLEG): container finished" podID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerID="0ad15248b91bfd8807d6e4fe9afdeb1651e986c9d31288af1d458c42035b77b5" exitCode=2 Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.384668 4990 generic.go:334] "Generic (PLEG): container finished" podID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerID="10eec2287582c865a5528fe4bd1fc1cdf006a2a1d9802edb4aaa7b2b6ecbb352" exitCode=0 Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.384449 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerDied","Data":"0e1a8495f4dbb1fbfda1371b96ecf67f978a97ca511f8f902b551fb5cf53c709"} Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.384736 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerDied","Data":"0ad15248b91bfd8807d6e4fe9afdeb1651e986c9d31288af1d458c42035b77b5"} Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.384752 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerDied","Data":"10eec2287582c865a5528fe4bd1fc1cdf006a2a1d9802edb4aaa7b2b6ecbb352"} Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.386207 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" event={"ID":"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e","Type":"ContainerStarted","Data":"8d671cb47e36755ea226221a2086110a02ed86ebb73f448d8147ccab642c1517"} Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.386317 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-log" containerID="cri-o://d3940336277d60ace5f36b2ee414ce777934d853e511b44a3150f66a80829fcb" gracePeriod=30 Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.386369 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-api" containerID="cri-o://c171b2e78dfbc422122941ee824aec74124b9142b8d725d65407c7b6ccc17792" gracePeriod=30 Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.414594 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" podStartSLOduration=3.414578098 podStartE2EDuration="3.414578098s" podCreationTimestamp="2025-12-05 01:34:31 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:34.405609884 +0000 UTC m=+1572.781825245" watchObservedRunningTime="2025-12-05 01:34:34.414578098 +0000 UTC m=+1572.790793459" Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.751518 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.751617 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 01:34:34 crc kubenswrapper[4990]: I1205 01:34:34.771250 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:35 crc kubenswrapper[4990]: I1205 01:34:35.397989 4990 generic.go:334] "Generic (PLEG): container finished" podID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerID="d3940336277d60ace5f36b2ee414ce777934d853e511b44a3150f66a80829fcb" exitCode=143 Dec 05 01:34:35 crc kubenswrapper[4990]: I1205 01:34:35.398065 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c","Type":"ContainerDied","Data":"d3940336277d60ace5f36b2ee414ce777934d853e511b44a3150f66a80829fcb"} Dec 05 01:34:35 crc kubenswrapper[4990]: I1205 01:34:35.398643 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.408328 4990 generic.go:334] "Generic (PLEG): container finished" podID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerID="5bf6752ff64cf6a92921e307e7c2fbc62cec51da91fc7002fb308151859faa2a" exitCode=0 Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.409657 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerDied","Data":"5bf6752ff64cf6a92921e307e7c2fbc62cec51da91fc7002fb308151859faa2a"} Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.641180 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.775745 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-scripts\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.775826 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p54bq\" (UniqueName: \"kubernetes.io/projected/a69cb0f1-d7a0-41f5-86e0-29182919b047-kube-api-access-p54bq\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.775874 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-config-data\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.775901 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-sg-core-conf-yaml\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.775934 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-log-httpd\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.775975 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-combined-ca-bundle\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.776090 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-ceilometer-tls-certs\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.776124 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-run-httpd\") pod \"a69cb0f1-d7a0-41f5-86e0-29182919b047\" (UID: \"a69cb0f1-d7a0-41f5-86e0-29182919b047\") " Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.776808 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.777145 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.796613 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69cb0f1-d7a0-41f5-86e0-29182919b047-kube-api-access-p54bq" (OuterVolumeSpecName: "kube-api-access-p54bq") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "kube-api-access-p54bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.798701 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-scripts" (OuterVolumeSpecName: "scripts") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.817418 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.840864 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.871100 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878526 4990 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878557 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878565 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878575 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p54bq\" (UniqueName: \"kubernetes.io/projected/a69cb0f1-d7a0-41f5-86e0-29182919b047-kube-api-access-p54bq\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878585 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878593 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a69cb0f1-d7a0-41f5-86e0-29182919b047-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.878601 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.888887 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-config-data" (OuterVolumeSpecName: "config-data") pod "a69cb0f1-d7a0-41f5-86e0-29182919b047" (UID: "a69cb0f1-d7a0-41f5-86e0-29182919b047"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:36 crc kubenswrapper[4990]: I1205 01:34:36.979800 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69cb0f1-d7a0-41f5-86e0-29182919b047-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.422301 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a69cb0f1-d7a0-41f5-86e0-29182919b047","Type":"ContainerDied","Data":"dec689f15ff18b92a8e76cd6c565e78225153d6ab444cf409c48ac7c92a53883"} Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.422358 4990 scope.go:117] "RemoveContainer" containerID="0e1a8495f4dbb1fbfda1371b96ecf67f978a97ca511f8f902b551fb5cf53c709" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.423361 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.442976 4990 scope.go:117] "RemoveContainer" containerID="0ad15248b91bfd8807d6e4fe9afdeb1651e986c9d31288af1d458c42035b77b5" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.479353 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.479728 4990 scope.go:117] "RemoveContainer" containerID="5bf6752ff64cf6a92921e307e7c2fbc62cec51da91fc7002fb308151859faa2a" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.488108 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.507717 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:37 crc kubenswrapper[4990]: E1205 01:34:37.508164 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="proxy-httpd" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508179 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="proxy-httpd" Dec 05 01:34:37 crc kubenswrapper[4990]: E1205 01:34:37.508196 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-notification-agent" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508204 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-notification-agent" Dec 05 01:34:37 crc kubenswrapper[4990]: E1205 01:34:37.508232 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-central-agent" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508239 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-central-agent" Dec 05 01:34:37 crc kubenswrapper[4990]: E1205 01:34:37.508320 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="sg-core" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508330 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="sg-core" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508546 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-central-agent" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508574 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="ceilometer-notification-agent" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508586 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="proxy-httpd" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.508595 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" containerName="sg-core" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.510217 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.512944 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.513518 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.513758 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.521229 4990 scope.go:117] "RemoveContainer" containerID="10eec2287582c865a5528fe4bd1fc1cdf006a2a1d9802edb4aaa7b2b6ecbb352" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.528645 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698229 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vcx8\" (UniqueName: \"kubernetes.io/projected/c8cbf17b-4408-40ea-81bd-c70478cf6095-kube-api-access-7vcx8\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698287 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698334 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-run-httpd\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698360 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-config-data\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698449 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698498 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-scripts\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698543 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-log-httpd\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.698570 
4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800277 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800350 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vcx8\" (UniqueName: \"kubernetes.io/projected/c8cbf17b-4408-40ea-81bd-c70478cf6095-kube-api-access-7vcx8\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800385 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800417 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-run-httpd\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800446 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-config-data\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800510 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800541 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-scripts\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.800574 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-log-httpd\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.801132 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-log-httpd\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 
01:34:37.801355 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-run-httpd\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.806361 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.806416 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-scripts\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.806887 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.807265 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.808921 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-config-data\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.816047 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vcx8\" (UniqueName: \"kubernetes.io/projected/c8cbf17b-4408-40ea-81bd-c70478cf6095-kube-api-access-7vcx8\") pod \"ceilometer-0\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.840878 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:34:37 crc kubenswrapper[4990]: I1205 01:34:37.962464 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69cb0f1-d7a0-41f5-86e0-29182919b047" path="/var/lib/kubelet/pods/a69cb0f1-d7a0-41f5-86e0-29182919b047/volumes" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.336693 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:34:38 crc kubenswrapper[4990]: W1205 01:34:38.351612 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8cbf17b_4408_40ea_81bd_c70478cf6095.slice/crio-4dba06b72fa8e7ff768cc233f523c342e0a6f9862022e329cf2dbd8bf4a341ea WatchSource:0}: Error finding container 4dba06b72fa8e7ff768cc233f523c342e0a6f9862022e329cf2dbd8bf4a341ea: Status 404 returned error can't find the container with id 4dba06b72fa8e7ff768cc233f523c342e0a6f9862022e329cf2dbd8bf4a341ea Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.434255 4990 generic.go:334] "Generic (PLEG): container finished" podID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerID="c171b2e78dfbc422122941ee824aec74124b9142b8d725d65407c7b6ccc17792" exitCode=0 Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.434345 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c","Type":"ContainerDied","Data":"c171b2e78dfbc422122941ee824aec74124b9142b8d725d65407c7b6ccc17792"} Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.436423 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerStarted","Data":"4dba06b72fa8e7ff768cc233f523c342e0a6f9862022e329cf2dbd8bf4a341ea"} Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.530920 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.547432 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-combined-ca-bundle\") pod \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.547623 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-config-data\") pod \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.547680 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-logs\") pod \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.547827 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n7zt\" (UniqueName: \"kubernetes.io/projected/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-kube-api-access-2n7zt\") pod \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\" (UID: \"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c\") " Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.548102 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-logs" (OuterVolumeSpecName: "logs") pod "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" (UID: "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.548321 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.563815 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-kube-api-access-2n7zt" (OuterVolumeSpecName: "kube-api-access-2n7zt") pod "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" (UID: "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c"). InnerVolumeSpecName "kube-api-access-2n7zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.590093 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" (UID: "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.599864 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-config-data" (OuterVolumeSpecName: "config-data") pod "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" (UID: "c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.649789 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.650128 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:38 crc kubenswrapper[4990]: I1205 01:34:38.650140 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n7zt\" (UniqueName: \"kubernetes.io/projected/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c-kube-api-access-2n7zt\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.458524 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c","Type":"ContainerDied","Data":"f9d84b5bb0f88b906be8a6ac31489f666e1a9007e1d6a17c1848d8c11e1faa50"} Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.460241 4990 scope.go:117] "RemoveContainer" containerID="c171b2e78dfbc422122941ee824aec74124b9142b8d725d65407c7b6ccc17792" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.460168 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.472991 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerStarted","Data":"31ce6ea891092f920fafd58685b5970d0c8960a1faf1c70db62854e2178e153a"} Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.527799 4990 scope.go:117] "RemoveContainer" containerID="d3940336277d60ace5f36b2ee414ce777934d853e511b44a3150f66a80829fcb" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.563616 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.588604 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.601248 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:39 crc kubenswrapper[4990]: E1205 01:34:39.601786 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-log" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.601805 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-log" Dec 05 01:34:39 crc kubenswrapper[4990]: E1205 01:34:39.601825 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-api" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.601832 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-api" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.602029 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-log" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.602056 4990 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" containerName="nova-api-api" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.603034 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.608144 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.608390 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.608613 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.611159 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.666295 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.666344 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.666362 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqlh4\" (UniqueName: \"kubernetes.io/projected/9e471e6a-7baf-4f1d-b121-0012f6e036b5-kube-api-access-lqlh4\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.666563 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-config-data\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.666592 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e471e6a-7baf-4f1d-b121-0012f6e036b5-logs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.666625 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.751644 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.751684 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.768344 4990 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-config-data\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.768389 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e471e6a-7baf-4f1d-b121-0012f6e036b5-logs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.768423 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.768548 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.768587 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.768608 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqlh4\" (UniqueName: \"kubernetes.io/projected/9e471e6a-7baf-4f1d-b121-0012f6e036b5-kube-api-access-lqlh4\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.771542 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e471e6a-7baf-4f1d-b121-0012f6e036b5-logs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.771674 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.775132 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-config-data\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.775654 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.775993 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " 
pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.785623 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.794524 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqlh4\" (UniqueName: \"kubernetes.io/projected/9e471e6a-7baf-4f1d-b121-0012f6e036b5-kube-api-access-lqlh4\") pod \"nova-api-0\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.818083 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.936878 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:39 crc kubenswrapper[4990]: I1205 01:34:39.940912 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c" path="/var/lib/kubelet/pods/c75d6e07-58ff-4d23-bf8f-af0fbd02dd1c/volumes" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.410770 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:40 crc kubenswrapper[4990]: W1205 01:34:40.412027 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e471e6a_7baf_4f1d_b121_0012f6e036b5.slice/crio-153276cb7417de47518a4e99cdf86ad3729dfcac69d1a5345a0f2eab59bd842d WatchSource:0}: Error finding container 153276cb7417de47518a4e99cdf86ad3729dfcac69d1a5345a0f2eab59bd842d: Status 404 returned error can't find the container with id 153276cb7417de47518a4e99cdf86ad3729dfcac69d1a5345a0f2eab59bd842d Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.487753 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerStarted","Data":"72b6662721b483f07b892ef1907c45dc83be4d09b4ac2d3ef321bc8da7ab9d10"} Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.487798 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerStarted","Data":"caed40083fa597fe943a30fe27bc0e925ac084161f972ab054dae2a9368983ca"} Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.492593 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e471e6a-7baf-4f1d-b121-0012f6e036b5","Type":"ContainerStarted","Data":"153276cb7417de47518a4e99cdf86ad3729dfcac69d1a5345a0f2eab59bd842d"} Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.520890 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.695986 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vbpct"] Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.697298 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.699217 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.700531 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.708448 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbpct"] Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.767656 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.767657 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.885646 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-config-data\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.885698 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.885757 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-scripts\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.885808 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf57h\" (UniqueName: \"kubernetes.io/projected/4839e707-1591-41e8-8bc3-23024188eb47-kube-api-access-zf57h\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.987659 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-config-data\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.987714 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.987766 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-scripts\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.988037 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf57h\" (UniqueName: \"kubernetes.io/projected/4839e707-1591-41e8-8bc3-23024188eb47-kube-api-access-zf57h\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.995945 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.996447 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-config-data\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:40 crc kubenswrapper[4990]: I1205 01:34:40.997193 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-scripts\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.005407 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf57h\" (UniqueName: \"kubernetes.io/projected/4839e707-1591-41e8-8bc3-23024188eb47-kube-api-access-zf57h\") pod \"nova-cell1-cell-mapping-vbpct\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.091732 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.506731 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e471e6a-7baf-4f1d-b121-0012f6e036b5","Type":"ContainerStarted","Data":"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096"} Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.507074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e471e6a-7baf-4f1d-b121-0012f6e036b5","Type":"ContainerStarted","Data":"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e"} Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.540405 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.540383698 podStartE2EDuration="2.540383698s" podCreationTimestamp="2025-12-05 01:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:41.522773789 +0000 UTC m=+1579.898989170" watchObservedRunningTime="2025-12-05 01:34:41.540383698 +0000 UTC m=+1579.916599059" Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.561904 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbpct"] Dec 05 01:34:41 crc kubenswrapper[4990]: I1205 01:34:41.970668 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.042801 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dvjzl"] Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.043732 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerName="dnsmasq-dns" containerID="cri-o://355f508a6ba74b579c0a1d7879d2f885a7d91996f956db831c47eaff9fdff134" gracePeriod=10 Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.516047 4990 generic.go:334] "Generic (PLEG): container finished" podID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerID="355f508a6ba74b579c0a1d7879d2f885a7d91996f956db831c47eaff9fdff134" exitCode=0 Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.516124 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" event={"ID":"b27ca4f7-1e58-4994-bc4f-1a751f08e628","Type":"ContainerDied","Data":"355f508a6ba74b579c0a1d7879d2f885a7d91996f956db831c47eaff9fdff134"} Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.516156 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" event={"ID":"b27ca4f7-1e58-4994-bc4f-1a751f08e628","Type":"ContainerDied","Data":"c4818ee5fcdd926f1ff8d706cf06613d924ef7a30dabe645dbf9b4547e091785"} Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.516170 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4818ee5fcdd926f1ff8d706cf06613d924ef7a30dabe645dbf9b4547e091785" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.520817 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbpct" event={"ID":"4839e707-1591-41e8-8bc3-23024188eb47","Type":"ContainerStarted","Data":"1da14eb63196d0ca1b50f0e559638781676ddc745754bd656a71b4fcad75d292"} Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 
01:34:42.520856 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbpct" event={"ID":"4839e707-1591-41e8-8bc3-23024188eb47","Type":"ContainerStarted","Data":"07b80ca2816f1e31488d9d5c507d6c3aa16cb4081d0e7a0fb1f4fba4cea9364e"} Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.537963 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vbpct" podStartSLOduration=2.537943304 podStartE2EDuration="2.537943304s" podCreationTimestamp="2025-12-05 01:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:42.532255682 +0000 UTC m=+1580.908471043" watchObservedRunningTime="2025-12-05 01:34:42.537943304 +0000 UTC m=+1580.914158675" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.573070 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.722868 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-config\") pod \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.723323 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-swift-storage-0\") pod \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.723553 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjdvt\" (UniqueName: \"kubernetes.io/projected/b27ca4f7-1e58-4994-bc4f-1a751f08e628-kube-api-access-mjdvt\") pod \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.723629 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-nb\") pod \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.723769 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-sb\") pod \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.723959 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-svc\") pod \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\" (UID: \"b27ca4f7-1e58-4994-bc4f-1a751f08e628\") " Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.730755 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27ca4f7-1e58-4994-bc4f-1a751f08e628-kube-api-access-mjdvt" (OuterVolumeSpecName: "kube-api-access-mjdvt") pod "b27ca4f7-1e58-4994-bc4f-1a751f08e628" (UID: "b27ca4f7-1e58-4994-bc4f-1a751f08e628"). InnerVolumeSpecName "kube-api-access-mjdvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.772714 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b27ca4f7-1e58-4994-bc4f-1a751f08e628" (UID: "b27ca4f7-1e58-4994-bc4f-1a751f08e628"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.781071 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b27ca4f7-1e58-4994-bc4f-1a751f08e628" (UID: "b27ca4f7-1e58-4994-bc4f-1a751f08e628"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.781439 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b27ca4f7-1e58-4994-bc4f-1a751f08e628" (UID: "b27ca4f7-1e58-4994-bc4f-1a751f08e628"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.783995 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-config" (OuterVolumeSpecName: "config") pod "b27ca4f7-1e58-4994-bc4f-1a751f08e628" (UID: "b27ca4f7-1e58-4994-bc4f-1a751f08e628"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.784311 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b27ca4f7-1e58-4994-bc4f-1a751f08e628" (UID: "b27ca4f7-1e58-4994-bc4f-1a751f08e628"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.826390 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjdvt\" (UniqueName: \"kubernetes.io/projected/b27ca4f7-1e58-4994-bc4f-1a751f08e628-kube-api-access-mjdvt\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.826424 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.826432 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.826444 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.826457 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:42 crc kubenswrapper[4990]: I1205 01:34:42.826465 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b27ca4f7-1e58-4994-bc4f-1a751f08e628-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:43 crc kubenswrapper[4990]: I1205 01:34:43.524530 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-dvjzl" Dec 05 01:34:43 crc kubenswrapper[4990]: I1205 01:34:43.561952 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dvjzl"] Dec 05 01:34:43 crc kubenswrapper[4990]: I1205 01:34:43.572669 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-dvjzl"] Dec 05 01:34:43 crc kubenswrapper[4990]: I1205 01:34:43.940627 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" path="/var/lib/kubelet/pods/b27ca4f7-1e58-4994-bc4f-1a751f08e628/volumes" Dec 05 01:34:45 crc kubenswrapper[4990]: I1205 01:34:45.548889 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerStarted","Data":"60607d382cd8bda26a5778bed70f82be69af7e7f24f195984c6f642727d62e2c"} Dec 05 01:34:45 crc kubenswrapper[4990]: I1205 01:34:45.549576 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 01:34:45 crc kubenswrapper[4990]: I1205 01:34:45.585800 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.369596114 podStartE2EDuration="8.585778504s" podCreationTimestamp="2025-12-05 01:34:37 +0000 UTC" firstStartedPulling="2025-12-05 01:34:38.353629486 +0000 UTC m=+1576.729844847" lastFinishedPulling="2025-12-05 01:34:44.569811866 +0000 UTC m=+1582.946027237" observedRunningTime="2025-12-05 01:34:45.574547755 +0000 UTC m=+1583.950763126" watchObservedRunningTime="2025-12-05 01:34:45.585778504 +0000 UTC m=+1583.961993875" Dec 05 01:34:46 crc kubenswrapper[4990]: I1205 
01:34:46.563921 4990 generic.go:334] "Generic (PLEG): container finished" podID="4839e707-1591-41e8-8bc3-23024188eb47" containerID="1da14eb63196d0ca1b50f0e559638781676ddc745754bd656a71b4fcad75d292" exitCode=0 Dec 05 01:34:46 crc kubenswrapper[4990]: I1205 01:34:46.564163 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbpct" event={"ID":"4839e707-1591-41e8-8bc3-23024188eb47","Type":"ContainerDied","Data":"1da14eb63196d0ca1b50f0e559638781676ddc745754bd656a71b4fcad75d292"} Dec 05 01:34:47 crc kubenswrapper[4990]: I1205 01:34:47.990554 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.142359 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-scripts\") pod \"4839e707-1591-41e8-8bc3-23024188eb47\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.142577 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf57h\" (UniqueName: \"kubernetes.io/projected/4839e707-1591-41e8-8bc3-23024188eb47-kube-api-access-zf57h\") pod \"4839e707-1591-41e8-8bc3-23024188eb47\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.142625 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-combined-ca-bundle\") pod \"4839e707-1591-41e8-8bc3-23024188eb47\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.142732 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-config-data\") pod \"4839e707-1591-41e8-8bc3-23024188eb47\" (UID: \"4839e707-1591-41e8-8bc3-23024188eb47\") " Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.149453 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-scripts" (OuterVolumeSpecName: "scripts") pod "4839e707-1591-41e8-8bc3-23024188eb47" (UID: "4839e707-1591-41e8-8bc3-23024188eb47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.150267 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4839e707-1591-41e8-8bc3-23024188eb47-kube-api-access-zf57h" (OuterVolumeSpecName: "kube-api-access-zf57h") pod "4839e707-1591-41e8-8bc3-23024188eb47" (UID: "4839e707-1591-41e8-8bc3-23024188eb47"). InnerVolumeSpecName "kube-api-access-zf57h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.177150 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-config-data" (OuterVolumeSpecName: "config-data") pod "4839e707-1591-41e8-8bc3-23024188eb47" (UID: "4839e707-1591-41e8-8bc3-23024188eb47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.191552 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4839e707-1591-41e8-8bc3-23024188eb47" (UID: "4839e707-1591-41e8-8bc3-23024188eb47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.245058 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf57h\" (UniqueName: \"kubernetes.io/projected/4839e707-1591-41e8-8bc3-23024188eb47-kube-api-access-zf57h\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.245088 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.245096 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.245106 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4839e707-1591-41e8-8bc3-23024188eb47-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.590873 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbpct" event={"ID":"4839e707-1591-41e8-8bc3-23024188eb47","Type":"ContainerDied","Data":"07b80ca2816f1e31488d9d5c507d6c3aa16cb4081d0e7a0fb1f4fba4cea9364e"} Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.591280 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07b80ca2816f1e31488d9d5c507d6c3aa16cb4081d0e7a0fb1f4fba4cea9364e" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.590969 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbpct" Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.789162 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.789382 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-log" containerID="cri-o://4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e" gracePeriod=30 Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.789747 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-api" containerID="cri-o://d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096" gracePeriod=30 Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.806467 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.806711 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4da319ab-4e7d-4159-b2e8-6cdb92838859" containerName="nova-scheduler-scheduler" containerID="cri-o://75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" gracePeriod=30 Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.833143 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.834795 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-metadata" containerID="cri-o://a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20" gracePeriod=30 Dec 05 01:34:48 crc kubenswrapper[4990]: I1205 01:34:48.835834 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-log" containerID="cri-o://95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d" gracePeriod=30 Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.117066 4990 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e471e6a_7baf_4f1d_b121_0012f6e036b5.slice/crio-d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e471e6a_7baf_4f1d_b121_0012f6e036b5.slice/crio-conmon-d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096.scope\": RecentStats: unable to find data in memory cache]" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.389709 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.571946 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-public-tls-certs\") pod \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.572221 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqlh4\" (UniqueName: \"kubernetes.io/projected/9e471e6a-7baf-4f1d-b121-0012f6e036b5-kube-api-access-lqlh4\") pod \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.572338 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-combined-ca-bundle\") pod \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.572527 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e471e6a-7baf-4f1d-b121-0012f6e036b5-logs\") pod \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.572640 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-config-data\") pod \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.572754 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-internal-tls-certs\") pod \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\" (UID: \"9e471e6a-7baf-4f1d-b121-0012f6e036b5\") " Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.572827 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e471e6a-7baf-4f1d-b121-0012f6e036b5-logs" (OuterVolumeSpecName: "logs") pod "9e471e6a-7baf-4f1d-b121-0012f6e036b5" (UID: "9e471e6a-7baf-4f1d-b121-0012f6e036b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.577644 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e471e6a-7baf-4f1d-b121-0012f6e036b5-kube-api-access-lqlh4" (OuterVolumeSpecName: "kube-api-access-lqlh4") pod "9e471e6a-7baf-4f1d-b121-0012f6e036b5" (UID: "9e471e6a-7baf-4f1d-b121-0012f6e036b5"). InnerVolumeSpecName "kube-api-access-lqlh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.608533 4990 generic.go:334] "Generic (PLEG): container finished" podID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerID="95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d" exitCode=143 Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.608616 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6df8299-697f-4c4e-a0f6-821ff4261eb2","Type":"ContainerDied","Data":"95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d"} Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611472 4990 generic.go:334] "Generic (PLEG): container finished" podID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerID="d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096" exitCode=0 Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611511 4990 generic.go:334] "Generic (PLEG): container finished" podID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerID="4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e" exitCode=143 Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611523 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e471e6a-7baf-4f1d-b121-0012f6e036b5","Type":"ContainerDied","Data":"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096"} Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611575 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e471e6a-7baf-4f1d-b121-0012f6e036b5","Type":"ContainerDied","Data":"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e"} Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611586 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e471e6a-7baf-4f1d-b121-0012f6e036b5","Type":"ContainerDied","Data":"153276cb7417de47518a4e99cdf86ad3729dfcac69d1a5345a0f2eab59bd842d"} Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611603 4990 scope.go:117] "RemoveContainer" containerID="d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.611614 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.615870 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e471e6a-7baf-4f1d-b121-0012f6e036b5" (UID: "9e471e6a-7baf-4f1d-b121-0012f6e036b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.621886 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-config-data" (OuterVolumeSpecName: "config-data") pod "9e471e6a-7baf-4f1d-b121-0012f6e036b5" (UID: "9e471e6a-7baf-4f1d-b121-0012f6e036b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.625071 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e471e6a-7baf-4f1d-b121-0012f6e036b5" (UID: "9e471e6a-7baf-4f1d-b121-0012f6e036b5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.635465 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9e471e6a-7baf-4f1d-b121-0012f6e036b5" (UID: "9e471e6a-7baf-4f1d-b121-0012f6e036b5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.640344 4990 scope.go:117] "RemoveContainer" containerID="4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.658714 4990 scope.go:117] "RemoveContainer" containerID="d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096" Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.659916 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096\": container with ID starting with d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096 not found: ID does not exist" containerID="d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.659949 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096"} err="failed to get container status \"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096\": rpc error: code = NotFound desc = could not find container \"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096\": container with ID starting with d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096 not found: ID does not exist" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.659971 4990 scope.go:117] "RemoveContainer" containerID="4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e" Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.660343 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e\": container with ID starting with 4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e not found: ID does not exist" containerID="4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.660381 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e"} err="failed to get container status \"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e\": rpc error: code = NotFound desc = could not find container \"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e\": container with ID starting with 
4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e not found: ID does not exist" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.660408 4990 scope.go:117] "RemoveContainer" containerID="d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.660760 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096"} err="failed to get container status \"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096\": rpc error: code = NotFound desc = could not find container \"d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096\": container with ID starting with d1f2fd303f5c18ea9d8393e19b59d850ee4eb4e8c7471177f767d3429ae8c096 not found: ID does not exist" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.660785 4990 scope.go:117] "RemoveContainer" containerID="4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.661119 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e"} err="failed to get container status \"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e\": rpc error: code = NotFound desc = could not find container \"4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e\": container with ID starting with 4936caf67699dbe33adeeca029dcd647c3040214fe2e779e3631877562e4c24e not found: ID does not exist" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.675107 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.675134 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqlh4\" (UniqueName: \"kubernetes.io/projected/9e471e6a-7baf-4f1d-b121-0012f6e036b5-kube-api-access-lqlh4\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.675147 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.675157 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e471e6a-7baf-4f1d-b121-0012f6e036b5-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.675167 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.675175 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e471e6a-7baf-4f1d-b121-0012f6e036b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.718016 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.719587 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.721528 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.721635 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4da319ab-4e7d-4159-b2e8-6cdb92838859" containerName="nova-scheduler-scheduler" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.977043 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.990448 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.999584 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:49 crc kubenswrapper[4990]: E1205 01:34:49.999930 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerName="dnsmasq-dns" Dec 05 01:34:49 crc kubenswrapper[4990]: I1205 01:34:49.999947 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerName="dnsmasq-dns" Dec 05 01:34:50 crc kubenswrapper[4990]: E1205 01:34:49.999968 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4839e707-1591-41e8-8bc3-23024188eb47" containerName="nova-manage" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:49.999976 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4839e707-1591-41e8-8bc3-23024188eb47" containerName="nova-manage" Dec 05 01:34:50 crc kubenswrapper[4990]: E1205 01:34:49.999990 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-api" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:49.999996 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-api" Dec 05 01:34:50 crc kubenswrapper[4990]: E1205 01:34:50.000014 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-log" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.000020 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-log" Dec 05 01:34:50 crc kubenswrapper[4990]: E1205 01:34:50.000039 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerName="init" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.000044 4990 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerName="init" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.000205 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-api" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.000220 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4839e707-1591-41e8-8bc3-23024188eb47" containerName="nova-manage" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.000231 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27ca4f7-1e58-4994-bc4f-1a751f08e628" containerName="dnsmasq-dns" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.000237 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" containerName="nova-api-log" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.001138 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.008729 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.010016 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.010317 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.037081 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.083188 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.083233 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5ac2be-fc48-4bde-a668-b3549462a101-logs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.083360 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ds7\" (UniqueName: \"kubernetes.io/projected/4b5ac2be-fc48-4bde-a668-b3549462a101-kube-api-access-r4ds7\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.083416 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-config-data\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.083457 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " 
pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.083528 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.185084 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ds7\" (UniqueName: \"kubernetes.io/projected/4b5ac2be-fc48-4bde-a668-b3549462a101-kube-api-access-r4ds7\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.185172 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-config-data\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.185220 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.185294 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.185342 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.185372 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5ac2be-fc48-4bde-a668-b3549462a101-logs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.186020 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5ac2be-fc48-4bde-a668-b3549462a101-logs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.190062 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.190065 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc 
kubenswrapper[4990]: I1205 01:34:50.190205 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-config-data\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.190720 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.212558 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4ds7\" (UniqueName: \"kubernetes.io/projected/4b5ac2be-fc48-4bde-a668-b3549462a101-kube-api-access-r4ds7\") pod \"nova-api-0\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.317328 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:34:50 crc kubenswrapper[4990]: W1205 01:34:50.791666 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b5ac2be_fc48_4bde_a668_b3549462a101.slice/crio-2ad1ba201f2b8f8454af2e5dd5cdffefad320a749d0a4a06f79652a2ccdc329c WatchSource:0}: Error finding container 2ad1ba201f2b8f8454af2e5dd5cdffefad320a749d0a4a06f79652a2ccdc329c: Status 404 returned error can't find the container with id 2ad1ba201f2b8f8454af2e5dd5cdffefad320a749d0a4a06f79652a2ccdc329c Dec 05 01:34:50 crc kubenswrapper[4990]: I1205 01:34:50.795303 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:34:51 crc kubenswrapper[4990]: I1205 01:34:51.648071 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b5ac2be-fc48-4bde-a668-b3549462a101","Type":"ContainerStarted","Data":"95871264dddedee0223bf43470e710503cc4d9eb3d22d3ebef3d08484f77a4e6"} Dec 05 01:34:51 crc kubenswrapper[4990]: I1205 01:34:51.649010 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b5ac2be-fc48-4bde-a668-b3549462a101","Type":"ContainerStarted","Data":"a6c8921d8a3c62aed69725cab2690e2cd5481b1e2d2f6d5631d6f5f5be266d43"} Dec 05 01:34:51 crc kubenswrapper[4990]: I1205 01:34:51.649038 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b5ac2be-fc48-4bde-a668-b3549462a101","Type":"ContainerStarted","Data":"2ad1ba201f2b8f8454af2e5dd5cdffefad320a749d0a4a06f79652a2ccdc329c"} Dec 05 01:34:51 crc kubenswrapper[4990]: I1205 01:34:51.689269 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.689250155 podStartE2EDuration="2.689250155s" podCreationTimestamp="2025-12-05 01:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:51.67743547 +0000 UTC m=+1590.053650851" watchObservedRunningTime="2025-12-05 01:34:51.689250155 +0000 UTC m=+1590.065465526" Dec 05 01:34:51 crc kubenswrapper[4990]: I1205 01:34:51.950402 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e471e6a-7baf-4f1d-b121-0012f6e036b5" 
path="/var/lib/kubelet/pods/9e471e6a-7baf-4f1d-b121-0012f6e036b5/volumes" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.496599 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.643453 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx4hx\" (UniqueName: \"kubernetes.io/projected/f6df8299-697f-4c4e-a0f6-821ff4261eb2-kube-api-access-wx4hx\") pod \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.644392 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-nova-metadata-tls-certs\") pod \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.644505 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-config-data\") pod \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.644531 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-combined-ca-bundle\") pod \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.644564 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6df8299-697f-4c4e-a0f6-821ff4261eb2-logs\") pod \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\" (UID: \"f6df8299-697f-4c4e-a0f6-821ff4261eb2\") " Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.645415 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6df8299-697f-4c4e-a0f6-821ff4261eb2-logs" (OuterVolumeSpecName: "logs") pod "f6df8299-697f-4c4e-a0f6-821ff4261eb2" (UID: "f6df8299-697f-4c4e-a0f6-821ff4261eb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.654539 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6df8299-697f-4c4e-a0f6-821ff4261eb2-kube-api-access-wx4hx" (OuterVolumeSpecName: "kube-api-access-wx4hx") pod "f6df8299-697f-4c4e-a0f6-821ff4261eb2" (UID: "f6df8299-697f-4c4e-a0f6-821ff4261eb2"). InnerVolumeSpecName "kube-api-access-wx4hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.659247 4990 generic.go:334] "Generic (PLEG): container finished" podID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerID="a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20" exitCode=0 Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.659401 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6df8299-697f-4c4e-a0f6-821ff4261eb2","Type":"ContainerDied","Data":"a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20"} Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.659448 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6df8299-697f-4c4e-a0f6-821ff4261eb2","Type":"ContainerDied","Data":"9a5c4130e6707b19c75c1f053256afacb144b2adb6b63c1b01a4fc4292aa65a0"} Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.659529 4990 scope.go:117] "RemoveContainer" containerID="a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.659823 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.694358 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6df8299-697f-4c4e-a0f6-821ff4261eb2" (UID: "f6df8299-697f-4c4e-a0f6-821ff4261eb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.705281 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-config-data" (OuterVolumeSpecName: "config-data") pod "f6df8299-697f-4c4e-a0f6-821ff4261eb2" (UID: "f6df8299-697f-4c4e-a0f6-821ff4261eb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.715628 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f6df8299-697f-4c4e-a0f6-821ff4261eb2" (UID: "f6df8299-697f-4c4e-a0f6-821ff4261eb2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.747659 4990 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.747686 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.747695 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6df8299-697f-4c4e-a0f6-821ff4261eb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.747968 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6df8299-697f-4c4e-a0f6-821ff4261eb2-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.748069 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx4hx\" (UniqueName: \"kubernetes.io/projected/f6df8299-697f-4c4e-a0f6-821ff4261eb2-kube-api-access-wx4hx\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.769171 4990 scope.go:117] "RemoveContainer" containerID="95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.792515 4990 scope.go:117] "RemoveContainer" containerID="a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20" Dec 05 01:34:52 crc kubenswrapper[4990]: E1205 01:34:52.792969 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20\": container with ID starting with a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20 not found: ID does not exist" containerID="a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.793081 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20"} err="failed to get container status \"a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20\": rpc error: code = NotFound desc = could not find container \"a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20\": container with ID starting with a83af7cea079fbdb14759ffdc7053823aa6cb90b13718f66c10ac757ffe51b20 not found: ID does not exist" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.793182 4990 scope.go:117] "RemoveContainer" containerID="95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d" Dec 05 01:34:52 crc kubenswrapper[4990]: E1205 01:34:52.794029 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d\": container with ID starting with 95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d not found: ID does not exist" containerID="95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d" Dec 05 01:34:52 crc kubenswrapper[4990]: I1205 01:34:52.794059 4990 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d"} err="failed to get container status \"95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d\": rpc error: code = NotFound desc = could not find container \"95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d\": container with ID starting with 95fdc2e2a069e3ea5d79973332eacf244af04019c0840d30d89d61965373560d not found: ID does not exist" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.011737 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.026661 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.037293 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:53 crc kubenswrapper[4990]: E1205 01:34:53.037769 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-log" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.037794 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-log" Dec 05 01:34:53 crc kubenswrapper[4990]: E1205 01:34:53.037812 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-metadata" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.037820 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-metadata" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.038059 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-metadata" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.038093 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" containerName="nova-metadata-log" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.039259 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.042235 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.042313 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.057810 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.155564 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-config-data\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.155639 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.156021 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.156139 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-logs\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.156249 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6l7m\" (UniqueName: \"kubernetes.io/projected/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-kube-api-access-m6l7m\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.259100 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.259800 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-logs\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.259882 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6l7m\" (UniqueName: \"kubernetes.io/projected/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-kube-api-access-m6l7m\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " 
pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.259919 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-config-data\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.259974 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.260616 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-logs\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.264197 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.265529 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-config-data\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.266081 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.286282 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6l7m\" (UniqueName: \"kubernetes.io/projected/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-kube-api-access-m6l7m\") pod \"nova-metadata-0\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.425055 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.608571 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.691729 4990 generic.go:334] "Generic (PLEG): container finished" podID="4da319ab-4e7d-4159-b2e8-6cdb92838859" containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" exitCode=0 Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.691795 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4da319ab-4e7d-4159-b2e8-6cdb92838859","Type":"ContainerDied","Data":"75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0"} Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.691827 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4da319ab-4e7d-4159-b2e8-6cdb92838859","Type":"ContainerDied","Data":"062d889f538d0b4250ccb1dbc1e7cf0bdc758435be71dacec5f027abd0a6bdeb"} Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.691849 4990 scope.go:117] "RemoveContainer" containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.692058 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.770668 4990 scope.go:117] "RemoveContainer" containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.772336 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlzrc\" (UniqueName: \"kubernetes.io/projected/4da319ab-4e7d-4159-b2e8-6cdb92838859-kube-api-access-xlzrc\") pod \"4da319ab-4e7d-4159-b2e8-6cdb92838859\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.772443 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-combined-ca-bundle\") pod \"4da319ab-4e7d-4159-b2e8-6cdb92838859\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.772469 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-config-data\") pod \"4da319ab-4e7d-4159-b2e8-6cdb92838859\" (UID: \"4da319ab-4e7d-4159-b2e8-6cdb92838859\") " Dec 05 01:34:53 crc kubenswrapper[4990]: E1205 01:34:53.774697 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0\": container with ID starting with 75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0 not found: ID does not exist" containerID="75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.774737 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0"} err="failed to get container status \"75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0\": rpc error: code = NotFound desc = could not find container \"75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0\": container with ID starting with 75cade813ce74a77438f758cdd051dc195af84f1016f58485d9c50a064c859f0 not 
found: ID does not exist" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.782978 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da319ab-4e7d-4159-b2e8-6cdb92838859-kube-api-access-xlzrc" (OuterVolumeSpecName: "kube-api-access-xlzrc") pod "4da319ab-4e7d-4159-b2e8-6cdb92838859" (UID: "4da319ab-4e7d-4159-b2e8-6cdb92838859"). InnerVolumeSpecName "kube-api-access-xlzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.842101 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-config-data" (OuterVolumeSpecName: "config-data") pod "4da319ab-4e7d-4159-b2e8-6cdb92838859" (UID: "4da319ab-4e7d-4159-b2e8-6cdb92838859"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.848277 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4da319ab-4e7d-4159-b2e8-6cdb92838859" (UID: "4da319ab-4e7d-4159-b2e8-6cdb92838859"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.876181 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.876214 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da319ab-4e7d-4159-b2e8-6cdb92838859-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.876225 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlzrc\" (UniqueName: \"kubernetes.io/projected/4da319ab-4e7d-4159-b2e8-6cdb92838859-kube-api-access-xlzrc\") on node \"crc\" DevicePath \"\"" Dec 05 01:34:53 crc kubenswrapper[4990]: W1205 01:34:53.937448 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba3c2a5d_0bec_4905_8cba_d0e565643fe7.slice/crio-89702e12cd0b1fc59fed2d582dd42159a3c5e067dea8924383f1f91d9cf11400 WatchSource:0}: Error finding container 89702e12cd0b1fc59fed2d582dd42159a3c5e067dea8924383f1f91d9cf11400: Status 404 returned error can't find the container with id 89702e12cd0b1fc59fed2d582dd42159a3c5e067dea8924383f1f91d9cf11400 Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.941130 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6df8299-697f-4c4e-a0f6-821ff4261eb2" path="/var/lib/kubelet/pods/f6df8299-697f-4c4e-a0f6-821ff4261eb2/volumes" Dec 05 01:34:53 crc kubenswrapper[4990]: I1205 01:34:53.941952 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.021037 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.032059 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.049586 4990 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Dec 05 01:34:54 crc kubenswrapper[4990]: E1205 01:34:54.050185 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da319ab-4e7d-4159-b2e8-6cdb92838859" containerName="nova-scheduler-scheduler" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.050216 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da319ab-4e7d-4159-b2e8-6cdb92838859" containerName="nova-scheduler-scheduler" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.050476 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da319ab-4e7d-4159-b2e8-6cdb92838859" containerName="nova-scheduler-scheduler" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.051277 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.053316 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.078591 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.182154 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.182294 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-config-data\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.182337 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nkv\" (UniqueName: \"kubernetes.io/projected/0dc80822-8cd5-4004-abdd-160ad6dcdd72-kube-api-access-56nkv\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.284610 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.284808 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-config-data\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.285646 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56nkv\" (UniqueName: \"kubernetes.io/projected/0dc80822-8cd5-4004-abdd-160ad6dcdd72-kube-api-access-56nkv\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.290453 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-config-data\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.291847 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.307728 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56nkv\" (UniqueName: \"kubernetes.io/projected/0dc80822-8cd5-4004-abdd-160ad6dcdd72-kube-api-access-56nkv\") pod \"nova-scheduler-0\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.564734 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.708438 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba3c2a5d-0bec-4905-8cba-d0e565643fe7","Type":"ContainerStarted","Data":"ddbc25bf62fa17b335529abe8efcf931bb13fc14cc17b4da59cbfeb8d6d41a3a"} Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.708474 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba3c2a5d-0bec-4905-8cba-d0e565643fe7","Type":"ContainerStarted","Data":"48ca7bcf7c508929d069e5d6224db21799fee57e82ffabebd9ac3f8157e41ad0"} Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.708499 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba3c2a5d-0bec-4905-8cba-d0e565643fe7","Type":"ContainerStarted","Data":"89702e12cd0b1fc59fed2d582dd42159a3c5e067dea8924383f1f91d9cf11400"} Dec 05 01:34:54 crc kubenswrapper[4990]: I1205 01:34:54.731050 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.731024984 podStartE2EDuration="1.731024984s" podCreationTimestamp="2025-12-05 01:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:54.726060253 +0000 UTC m=+1593.102275614" watchObservedRunningTime="2025-12-05 01:34:54.731024984 +0000 UTC m=+1593.107240345" Dec 05 01:34:55 crc kubenswrapper[4990]: I1205 01:34:55.072034 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:34:55 crc kubenswrapper[4990]: I1205 01:34:55.721820 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dc80822-8cd5-4004-abdd-160ad6dcdd72","Type":"ContainerStarted","Data":"8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073"} Dec 05 01:34:55 crc kubenswrapper[4990]: I1205 01:34:55.722158 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dc80822-8cd5-4004-abdd-160ad6dcdd72","Type":"ContainerStarted","Data":"d8e75cc6e8c2c7a97855fc96c167bf14c5d22c4bb98a4b0e536dd698e0d33804"} Dec 05 01:34:55 crc kubenswrapper[4990]: I1205 01:34:55.763694 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.763654264 
podStartE2EDuration="1.763654264s" podCreationTimestamp="2025-12-05 01:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:34:55.747533457 +0000 UTC m=+1594.123748848" watchObservedRunningTime="2025-12-05 01:34:55.763654264 +0000 UTC m=+1594.139869665" Dec 05 01:34:55 crc kubenswrapper[4990]: I1205 01:34:55.949138 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da319ab-4e7d-4159-b2e8-6cdb92838859" path="/var/lib/kubelet/pods/4da319ab-4e7d-4159-b2e8-6cdb92838859/volumes" Dec 05 01:34:57 crc kubenswrapper[4990]: I1205 01:34:57.896039 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkmcq"] Dec 05 01:34:57 crc kubenswrapper[4990]: I1205 01:34:57.898980 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:57 crc kubenswrapper[4990]: I1205 01:34:57.922728 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkmcq"] Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.072579 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-utilities\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.072699 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfgc\" (UniqueName: \"kubernetes.io/projected/48b21222-72d2-46f8-b377-463876e23ea8-kube-api-access-hwfgc\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.073981 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-catalog-content\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.176118 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-catalog-content\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.176214 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-utilities\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.176256 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfgc\" (UniqueName: \"kubernetes.io/projected/48b21222-72d2-46f8-b377-463876e23ea8-kube-api-access-hwfgc\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " 
pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.176843 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-catalog-content\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.177046 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-utilities\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.195611 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfgc\" (UniqueName: \"kubernetes.io/projected/48b21222-72d2-46f8-b377-463876e23ea8-kube-api-access-hwfgc\") pod \"redhat-marketplace-mkmcq\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") " pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.236880 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.426191 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.426567 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.704329 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkmcq"] Dec 05 01:34:58 crc kubenswrapper[4990]: I1205 01:34:58.753621 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkmcq" event={"ID":"48b21222-72d2-46f8-b377-463876e23ea8","Type":"ContainerStarted","Data":"1133d52413c92b6691e0eeef9f71f38f4b736f40b6b48fbfc2d15afe7453d6cc"} Dec 05 01:34:59 crc kubenswrapper[4990]: I1205 01:34:59.565060 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 01:34:59 crc kubenswrapper[4990]: I1205 01:34:59.772830 4990 generic.go:334] "Generic (PLEG): container finished" podID="48b21222-72d2-46f8-b377-463876e23ea8" containerID="3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65" exitCode=0 Dec 05 01:34:59 crc kubenswrapper[4990]: I1205 01:34:59.772909 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkmcq" event={"ID":"48b21222-72d2-46f8-b377-463876e23ea8","Type":"ContainerDied","Data":"3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65"} Dec 05 01:35:00 crc kubenswrapper[4990]: I1205 01:35:00.318148 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 01:35:00 crc kubenswrapper[4990]: I1205 01:35:00.318236 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 01:35:01 crc kubenswrapper[4990]: I1205 01:35:01.332654 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 01:35:01 crc kubenswrapper[4990]: I1205 01:35:01.332671 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 01:35:01 crc kubenswrapper[4990]: I1205 01:35:01.792773 4990 generic.go:334] "Generic (PLEG): container finished" podID="48b21222-72d2-46f8-b377-463876e23ea8" containerID="78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8" exitCode=0 Dec 05 01:35:01 crc kubenswrapper[4990]: I1205 01:35:01.792848 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkmcq" event={"ID":"48b21222-72d2-46f8-b377-463876e23ea8","Type":"ContainerDied","Data":"78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8"} Dec 05 01:35:02 crc kubenswrapper[4990]: I1205 01:35:02.804361 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkmcq" event={"ID":"48b21222-72d2-46f8-b377-463876e23ea8","Type":"ContainerStarted","Data":"1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2"} Dec 05 01:35:02 crc kubenswrapper[4990]: I1205 01:35:02.829696 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkmcq" podStartSLOduration=3.380054012 podStartE2EDuration="5.829672788s" podCreationTimestamp="2025-12-05 01:34:57 +0000 UTC" firstStartedPulling="2025-12-05 01:34:59.777318709 +0000 UTC m=+1598.153534110" lastFinishedPulling="2025-12-05 01:35:02.226937525 +0000 UTC m=+1600.603152886" observedRunningTime="2025-12-05 01:35:02.819776007 +0000 UTC m=+1601.195991378" watchObservedRunningTime="2025-12-05 01:35:02.829672788 +0000 UTC m=+1601.205888149" Dec 05 01:35:03 crc kubenswrapper[4990]: I1205 01:35:03.425922 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 01:35:03 crc kubenswrapper[4990]: I1205 01:35:03.425979 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 01:35:04 crc kubenswrapper[4990]: I1205 01:35:04.440674 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 01:35:04 crc kubenswrapper[4990]: I1205 01:35:04.440706 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 01:35:04 crc kubenswrapper[4990]: I1205 01:35:04.565533 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 01:35:04 crc kubenswrapper[4990]: I1205 01:35:04.613959 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 01:35:04 crc kubenswrapper[4990]: I1205 01:35:04.872758 4990 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 01:35:07 crc kubenswrapper[4990]: I1205 01:35:07.853126 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 01:35:08 crc kubenswrapper[4990]: I1205 01:35:08.238053 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:35:08 crc kubenswrapper[4990]: I1205 01:35:08.238420 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:35:08 crc kubenswrapper[4990]: I1205 01:35:08.326753 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:35:08 crc kubenswrapper[4990]: I1205 01:35:08.936683 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkmcq" Dec 05 01:35:08 crc kubenswrapper[4990]: I1205 01:35:08.994184 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkmcq"] Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.334039 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.336546 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.340019 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.347560 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.899762 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.899769 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkmcq" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="registry-server" containerID="cri-o://1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2" gracePeriod=2 Dec 05 01:35:10 crc kubenswrapper[4990]: I1205 01:35:10.911325 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.397446 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkmcq"
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.441126 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-utilities\") pod \"48b21222-72d2-46f8-b377-463876e23ea8\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") "
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.442194 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwfgc\" (UniqueName: \"kubernetes.io/projected/48b21222-72d2-46f8-b377-463876e23ea8-kube-api-access-hwfgc\") pod \"48b21222-72d2-46f8-b377-463876e23ea8\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") "
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.442323 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-catalog-content\") pod \"48b21222-72d2-46f8-b377-463876e23ea8\" (UID: \"48b21222-72d2-46f8-b377-463876e23ea8\") "
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.442654 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-utilities" (OuterVolumeSpecName: "utilities") pod "48b21222-72d2-46f8-b377-463876e23ea8" (UID: "48b21222-72d2-46f8-b377-463876e23ea8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.442949 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.453525 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b21222-72d2-46f8-b377-463876e23ea8-kube-api-access-hwfgc" (OuterVolumeSpecName: "kube-api-access-hwfgc") pod "48b21222-72d2-46f8-b377-463876e23ea8" (UID: "48b21222-72d2-46f8-b377-463876e23ea8"). InnerVolumeSpecName "kube-api-access-hwfgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.473649 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48b21222-72d2-46f8-b377-463876e23ea8" (UID: "48b21222-72d2-46f8-b377-463876e23ea8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.545834 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwfgc\" (UniqueName: \"kubernetes.io/projected/48b21222-72d2-46f8-b377-463876e23ea8-kube-api-access-hwfgc\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.545910 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b21222-72d2-46f8-b377-463876e23ea8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.919102 4990 generic.go:334] "Generic (PLEG): container finished" podID="48b21222-72d2-46f8-b377-463876e23ea8" containerID="1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2" exitCode=0
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.919185 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkmcq" event={"ID":"48b21222-72d2-46f8-b377-463876e23ea8","Type":"ContainerDied","Data":"1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2"}
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.919237 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkmcq"
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.919263 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkmcq" event={"ID":"48b21222-72d2-46f8-b377-463876e23ea8","Type":"ContainerDied","Data":"1133d52413c92b6691e0eeef9f71f38f4b736f40b6b48fbfc2d15afe7453d6cc"}
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.919289 4990 scope.go:117] "RemoveContainer" containerID="1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2"
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.964717 4990 scope.go:117] "RemoveContainer" containerID="78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8"
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.987215 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkmcq"]
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.997108 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkmcq"]
Dec 05 01:35:11 crc kubenswrapper[4990]: I1205 01:35:11.999224 4990 scope.go:117] "RemoveContainer" containerID="3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65"
Dec 05 01:35:12 crc kubenswrapper[4990]: I1205 01:35:12.066052 4990 scope.go:117] "RemoveContainer" containerID="1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2"
Dec 05 01:35:12 crc kubenswrapper[4990]: E1205 01:35:12.066746 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2\": container with ID starting with 1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2 not found: ID does not exist" containerID="1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2"
Dec 05 01:35:12 crc kubenswrapper[4990]: I1205 01:35:12.066804 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2"} err="failed to get container status \"1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2\": rpc error: code = NotFound desc = could not find container \"1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2\": container with ID starting with 1c3264fbfdec9963cb6527d63557d392d85488648a4c58d0e9494619704fa1a2 not found: ID does not exist"
Dec 05 01:35:12 crc kubenswrapper[4990]: I1205 01:35:12.066845 4990 scope.go:117] "RemoveContainer" containerID="78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8"
Dec 05 01:35:12 crc kubenswrapper[4990]: E1205 01:35:12.067426 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8\": container with ID starting with 78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8 not found: ID does not exist" containerID="78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8"
Dec 05 01:35:12 crc kubenswrapper[4990]: I1205 01:35:12.067500 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8"} err="failed to get container status \"78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8\": rpc error: code = NotFound desc = could not find container \"78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8\": container with ID starting with 78d28866b61e3f7bf2ce2aa721e0881eccc8d0684507b132748e43596f559ea8 not found: ID does not exist"
Dec 05 01:35:12 crc kubenswrapper[4990]: I1205 01:35:12.067536 4990 scope.go:117] "RemoveContainer" containerID="3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65"
Dec 05 01:35:12 crc kubenswrapper[4990]: E1205 01:35:12.068063 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65\": container with ID starting with 3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65 not found: ID does not exist" containerID="3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65"
Dec 05 01:35:12 crc kubenswrapper[4990]: I1205 01:35:12.068091 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65"} err="failed to get container status \"3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65\": rpc error: code = NotFound desc = could not find container \"3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65\": container with ID starting with 3c0965b21ad985d93780f6a97289dcefaf68e0e029ce85d629c89fe74ea5ab65 not found: ID does not exist"
Dec 05 01:35:13 crc kubenswrapper[4990]: I1205 01:35:13.434789 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 05 01:35:13 crc kubenswrapper[4990]: I1205 01:35:13.435878 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 05 01:35:13 crc kubenswrapper[4990]: I1205 01:35:13.446748 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 05 01:35:13 crc kubenswrapper[4990]: I1205 01:35:13.950257 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b21222-72d2-46f8-b377-463876e23ea8" path="/var/lib/kubelet/pods/48b21222-72d2-46f8-b377-463876e23ea8/volumes"
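The repeated "ContainerStatus from runtime service failed ... NotFound" errors above are a benign race: by the time the kubelet retries RemoveContainer for the redhat-marketplace-mkmcq containers, CRI-O has already pruned them, so the follow-up status lookup fails. A deleter can treat that outcome as success, since the desired end state (container gone) is already reached. The sketch below is illustrative only, under the assumption of a hypothetical remove callback standing in for a CRI client; it is not the kubelet's actual code path:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer wraps a CRI-style remove call (the remove parameter is
    // a hypothetical stand-in). A gRPC NotFound reply means another path
    // already deleted the container, so the error can be swallowed and the
    // delete treated as idempotent.
    func removeContainer(remove func(id string) error, id string) error {
        err := remove(id)
        if err == nil {
            return nil
        }
        if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
            return nil // already gone: nothing left to do
        }
        return fmt.Errorf("removing container %s: %w", id, err)
    }

    func main() {
        // Simulate the race seen in the log: the runtime answers NotFound.
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeContainer(gone, "1c3264fbfdec")) // prints <nil>
    }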
Dec 05 01:35:13 crc kubenswrapper[4990]: I1205 01:35:13.953262 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.795617 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9rcn"]
Dec 05 01:35:14 crc kubenswrapper[4990]: E1205 01:35:14.812713 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="extract-content"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.812760 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="extract-content"
Dec 05 01:35:14 crc kubenswrapper[4990]: E1205 01:35:14.812791 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="registry-server"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.812801 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="registry-server"
Dec 05 01:35:14 crc kubenswrapper[4990]: E1205 01:35:14.812851 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="extract-utilities"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.812863 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="extract-utilities"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.813554 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b21222-72d2-46f8-b377-463876e23ea8" containerName="registry-server"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.827564 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.851833 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9rcn"]
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.929549 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-catalog-content\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.929642 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-utilities\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:14 crc kubenswrapper[4990]: I1205 01:35:14.929688 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtmj\" (UniqueName: \"kubernetes.io/projected/b5209318-e3f9-4052-bb19-686a221e0fd6-kube-api-access-njtmj\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.031248 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-catalog-content\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.031401 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-utilities\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.031466 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtmj\" (UniqueName: \"kubernetes.io/projected/b5209318-e3f9-4052-bb19-686a221e0fd6-kube-api-access-njtmj\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.031753 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-catalog-content\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.032161 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-utilities\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.052347 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtmj\" (UniqueName: \"kubernetes.io/projected/b5209318-e3f9-4052-bb19-686a221e0fd6-kube-api-access-njtmj\") pod \"community-operators-f9rcn\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") " pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.162003 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.691098 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9rcn"]
Dec 05 01:35:15 crc kubenswrapper[4990]: W1205 01:35:15.693890 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5209318_e3f9_4052_bb19_686a221e0fd6.slice/crio-5ef24a84dbf9eb61af8e595095be7548334ff499f370254e6535928da5d034fe WatchSource:0}: Error finding container 5ef24a84dbf9eb61af8e595095be7548334ff499f370254e6535928da5d034fe: Status 404 returned error can't find the container with id 5ef24a84dbf9eb61af8e595095be7548334ff499f370254e6535928da5d034fe
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.970403 4990 generic.go:334] "Generic (PLEG): container finished" podID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerID="3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac" exitCode=0
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.970562 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerDied","Data":"3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac"}
Dec 05 01:35:15 crc kubenswrapper[4990]: I1205 01:35:15.970890 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerStarted","Data":"5ef24a84dbf9eb61af8e595095be7548334ff499f370254e6535928da5d034fe"}
Dec 05 01:35:16 crc kubenswrapper[4990]: I1205 01:35:16.987223 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerStarted","Data":"b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d"}
Dec 05 01:35:18 crc kubenswrapper[4990]: I1205 01:35:18.001851 4990 generic.go:334] "Generic (PLEG): container finished" podID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerID="b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d" exitCode=0
Dec 05 01:35:18 crc kubenswrapper[4990]: I1205 01:35:18.001929 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerDied","Data":"b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d"}
Dec 05 01:35:19 crc kubenswrapper[4990]: I1205 01:35:19.028552 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerStarted","Data":"9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990"}
Dec 05 01:35:19 crc kubenswrapper[4990]: I1205 01:35:19.070162 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9rcn" podStartSLOduration=2.63029723 podStartE2EDuration="5.070124499s" podCreationTimestamp="2025-12-05 01:35:14 +0000 UTC" firstStartedPulling="2025-12-05 01:35:15.972941439 +0000 UTC m=+1614.349156820" lastFinishedPulling="2025-12-05 01:35:18.412768728 +0000 UTC m=+1616.788984089" observedRunningTime="2025-12-05 01:35:19.051908792 +0000 UTC m=+1617.428124203" watchObservedRunningTime="2025-12-05 01:35:19.070124499 +0000 UTC m=+1617.446339900"
Dec 05 01:35:21 crc kubenswrapper[4990]: I1205 01:35:21.824701 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 01:35:21 crc kubenswrapper[4990]: I1205 01:35:21.825410 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 01:35:25 crc kubenswrapper[4990]: I1205 01:35:25.162343 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:25 crc kubenswrapper[4990]: I1205 01:35:25.163125 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:25 crc kubenswrapper[4990]: I1205 01:35:25.234904 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:26 crc kubenswrapper[4990]: I1205 01:35:26.188262 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9rcn"
Dec 05 01:35:26 crc kubenswrapper[4990]: I1205 01:35:26.253663 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9rcn"]
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.134969 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9rcn" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="registry-server" containerID="cri-o://9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990" gracePeriod=2
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.639614 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9rcn"
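The "Observed pod startup duration" entry for community-operators-f9rcn a few lines up encodes a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), agreeing with the logged value up to rounding of the second-truncated creation timestamp. A quick check of the arithmetic, with the timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-12-05 01:35:14 +0000 UTC")
        firstPull := mustParse("2025-12-05 01:35:15.972941439 +0000 UTC")
        lastPull := mustParse("2025-12-05 01:35:18.412768728 +0000 UTC")
        watchRunning := mustParse("2025-12-05 01:35:19.070124499 +0000 UTC")

        e2e := watchRunning.Sub(created)     // 5.070124499s, matches podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // ~2.630297210s, matches podStartSLOduration
        fmt.Println(e2e, slo)
    }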
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.735136 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-utilities\") pod \"b5209318-e3f9-4052-bb19-686a221e0fd6\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") "
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.735207 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njtmj\" (UniqueName: \"kubernetes.io/projected/b5209318-e3f9-4052-bb19-686a221e0fd6-kube-api-access-njtmj\") pod \"b5209318-e3f9-4052-bb19-686a221e0fd6\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") "
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.735287 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-catalog-content\") pod \"b5209318-e3f9-4052-bb19-686a221e0fd6\" (UID: \"b5209318-e3f9-4052-bb19-686a221e0fd6\") "
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.736375 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-utilities" (OuterVolumeSpecName: "utilities") pod "b5209318-e3f9-4052-bb19-686a221e0fd6" (UID: "b5209318-e3f9-4052-bb19-686a221e0fd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.742278 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5209318-e3f9-4052-bb19-686a221e0fd6-kube-api-access-njtmj" (OuterVolumeSpecName: "kube-api-access-njtmj") pod "b5209318-e3f9-4052-bb19-686a221e0fd6" (UID: "b5209318-e3f9-4052-bb19-686a221e0fd6"). InnerVolumeSpecName "kube-api-access-njtmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.792104 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5209318-e3f9-4052-bb19-686a221e0fd6" (UID: "b5209318-e3f9-4052-bb19-686a221e0fd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.838573 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.838632 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njtmj\" (UniqueName: \"kubernetes.io/projected/b5209318-e3f9-4052-bb19-686a221e0fd6-kube-api-access-njtmj\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:28 crc kubenswrapper[4990]: I1205 01:35:28.838654 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5209318-e3f9-4052-bb19-686a221e0fd6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.153242 4990 generic.go:334] "Generic (PLEG): container finished" podID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerID="9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990" exitCode=0
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.153308 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerDied","Data":"9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990"}
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.153368 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9rcn" event={"ID":"b5209318-e3f9-4052-bb19-686a221e0fd6","Type":"ContainerDied","Data":"5ef24a84dbf9eb61af8e595095be7548334ff499f370254e6535928da5d034fe"}
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.153399 4990 scope.go:117] "RemoveContainer" containerID="9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.153425 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9rcn"
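The mount sequence earlier (VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded") and the teardown just above (UnmountVolume started, TearDown succeeded, "Volume detached") are two sides of the kubelet volume manager's reconciler, which compares a desired state of the world against an actual state. A minimal sketch of that pattern, with illustrative names rather than the kubelet's real types:

    package main

    import "fmt"

    // reconcile brings actual in line with desired: mount anything desired
    // but not yet mounted, unmount anything mounted but no longer desired.
    func reconcile(desired, actual map[string]bool, mount, unmount func(string)) {
        for vol := range desired {
            if !actual[vol] {
                mount(vol)
                actual[vol] = true
            }
        }
        for vol := range actual {
            if !desired[vol] {
                unmount(vol)
                delete(actual, vol)
            }
        }
    }

    func main() {
        desired := map[string]bool{"catalog-content": true, "utilities": true, "kube-api-access-njtmj": true}
        actual := map[string]bool{}
        mount := func(v string) { fmt.Println("MountVolume.SetUp:", v) }
        unmount := func(v string) { fmt.Println("UnmountVolume.TearDown:", v) }
        reconcile(desired, actual, mount, unmount) // pod created: mounts all three volumes
        desired = map[string]bool{}                // pod deleted: nothing is desired anymore
        reconcile(desired, actual, mount, unmount) // tears every volume down
    }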
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.192181 4990 scope.go:117] "RemoveContainer" containerID="b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.223836 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9rcn"]
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.236640 4990 scope.go:117] "RemoveContainer" containerID="3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.238632 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9rcn"]
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.301715 4990 scope.go:117] "RemoveContainer" containerID="9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990"
Dec 05 01:35:29 crc kubenswrapper[4990]: E1205 01:35:29.305691 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990\": container with ID starting with 9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990 not found: ID does not exist" containerID="9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.305769 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990"} err="failed to get container status \"9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990\": rpc error: code = NotFound desc = could not find container \"9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990\": container with ID starting with 9772c205729b359769c07d2e6e59ee13460625ad40da4574389bf28499115990 not found: ID does not exist"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.305810 4990 scope.go:117] "RemoveContainer" containerID="b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d"
Dec 05 01:35:29 crc kubenswrapper[4990]: E1205 01:35:29.306441 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d\": container with ID starting with b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d not found: ID does not exist" containerID="b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.306507 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d"} err="failed to get container status \"b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d\": rpc error: code = NotFound desc = could not find container \"b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d\": container with ID starting with b39eefafa71db6e9066d8753cb1fff311b6fa89b439e073d3302e4924735601d not found: ID does not exist"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.306541 4990 scope.go:117] "RemoveContainer" containerID="3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac"
Dec 05 01:35:29 crc kubenswrapper[4990]: E1205 01:35:29.306902 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac\": container with ID starting with 3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac not found: ID does not exist" containerID="3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.306946 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac"} err="failed to get container status \"3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac\": rpc error: code = NotFound desc = could not find container \"3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac\": container with ID starting with 3324180cf10d31968aa6d26b7e76976c532cdcdf6aa63a1d4bf145ba0c9dbfac not found: ID does not exist"
Dec 05 01:35:29 crc kubenswrapper[4990]: I1205 01:35:29.949084 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" path="/var/lib/kubelet/pods/b5209318-e3f9-4052-bb19-686a221e0fd6/volumes"
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.857469 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.858111 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="cinder-scheduler" containerID="cri-o://a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3" gracePeriod=30
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.858235 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="probe" containerID="cri-o://7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716" gracePeriod=30
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.966863 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7599ccc789-q6ldt"]
Dec 05 01:35:33 crc kubenswrapper[4990]: E1205 01:35:33.967229 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="registry-server"
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.967243 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="registry-server"
Dec 05 01:35:33 crc kubenswrapper[4990]: E1205 01:35:33.967269 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="extract-utilities"
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.967276 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="extract-utilities"
Dec 05 01:35:33 crc kubenswrapper[4990]: E1205 01:35:33.967287 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="extract-content"
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.967293 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="extract-content"
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.967465 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5209318-e3f9-4052-bb19-686a221e0fd6" containerName="registry-server"
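The paired cpu_manager/state_mem entries above show the kubelet's resource managers dropping per-container bookkeeping for the deleted pod: assignments are keyed by (podUID, containerName), and RemoveStaleState sweeps any entry whose pod no longer exists. A sketch of that bookkeeping, with illustrative types rather than the kubelet's real state store:

    package main

    import "fmt"

    // assignments maps podUID -> containerName -> an opaque CPU set string.
    type assignments map[string]map[string]string

    // removeStaleState drops every assignment whose pod is not in alive.
    func (a assignments) removeStaleState(alive map[string]bool) {
        for podUID, containers := range a {
            if alive[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", podUID, name)
            }
            delete(a, podUID)
        }
    }

    func main() {
        a := assignments{
            "b5209318-e3f9-4052-bb19-686a221e0fd6": {
                "extract-utilities": "0-1", "extract-content": "0-1", "registry-server": "0-1",
            },
        }
        // The pod has been removed from the API, so nothing keeps it alive.
        a.removeStaleState(map[string]bool{})
    }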
Dec 05 01:35:33 crc kubenswrapper[4990]: I1205 01:35:33.968347 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.001444 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57775f7b86-mwzx9"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.003218 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.034319 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7599ccc789-q6ldt"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.053968 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-combined-ca-bundle\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054026 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data-custom\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054073 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054090 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data-custom\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054124 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5bp\" (UniqueName: \"kubernetes.io/projected/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-kube-api-access-lq5bp\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054160 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-logs\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054195 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054213 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-logs\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054245 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-combined-ca-bundle\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.054266 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb64w\" (UniqueName: \"kubernetes.io/projected/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-kube-api-access-mb64w\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.072985 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57775f7b86-mwzx9"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173179 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data-custom\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173263 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173285 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data-custom\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173352 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq5bp\" (UniqueName: \"kubernetes.io/projected/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-kube-api-access-lq5bp\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173416 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-logs\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173520 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173549 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-logs\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173600 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-combined-ca-bundle\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173632 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb64w\" (UniqueName: \"kubernetes.io/projected/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-kube-api-access-mb64w\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.173711 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-combined-ca-bundle\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.186517 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data-custom\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.188827 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-logs\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.212092 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.224230 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-logs\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.230666 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.230726 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-combined-ca-bundle\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.231231 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data-custom\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.233784 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-combined-ca-bundle\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.308416 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.308926 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api-log" containerID="cri-o://75a01a2a625f0f4818f4355774ffa785f189c55650887e146da7dd75eb006af5" gracePeriod=30
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.309015 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api" containerID="cri-o://90907eab6e43a67e9dd116d95af526faeb9a66ef61e70f7e11881689a35c73d5" gracePeriod=30
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.309820 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq5bp\" (UniqueName: \"kubernetes.io/projected/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-kube-api-access-lq5bp\") pod \"barbican-keystone-listener-57775f7b86-mwzx9\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.316524 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.328064 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb64w\" (UniqueName: \"kubernetes.io/projected/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-kube-api-access-mb64w\") pod \"barbican-worker-7599ccc789-q6ldt\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.335521 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.358152 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78f948dd74-zmh7q"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.359726 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.382672 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f948dd74-zmh7q"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403720 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7241f3-92bb-4295-97d9-4284784b11f3-logs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403763 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-combined-ca-bundle\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403809 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403855 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-internal-tls-certs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403895 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403941 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8xp\" (UniqueName: \"kubernetes.io/projected/2c7241f3-92bb-4295-97d9-4284784b11f3-kube-api-access-hx8xp\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.403971 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-public-tls-certs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: E1205 01:35:34.404279 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 05 01:35:34 crc kubenswrapper[4990]: E1205 01:35:34.404318 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data podName:ed473a7a-f068-49a3-ae4c-b57b39e33b28 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:34.904303142 +0000 UTC m=+1633.280518503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data") pod "rabbitmq-cell1-server-0" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28") : configmap "rabbitmq-cell1-config-data" not found
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.446584 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.446791 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="f19ad196-b05b-4ade-ba2b-3b532d447f8e" containerName="openstackclient" containerID="cri-o://125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527" gracePeriod=2
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.464807 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.510121 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gch4g"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.510365 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-gch4g" podUID="92e80556-5f2d-44ed-b165-3211fd50ad98" containerName="openstack-network-exporter" containerID="cri-o://c2e8be780f912a39bffb8d62d08b624b97b1904a55d0be61a24accc3f6874ef6" gracePeriod=30
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512369 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7241f3-92bb-4295-97d9-4284784b11f3-logs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512418 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-combined-ca-bundle\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512442 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
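The "Killing container with a grace period" entries above (gracePeriod=30 for the cinder containers, gracePeriod=2 for openstackclient) follow the standard termination contract: the runtime sends SIGTERM, waits up to the grace period, then SIGKILLs whatever is left. A process-level sketch of the same contract, using plain os/exec rather than the CRI (Unix-only, illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace sends SIGTERM, waits up to grace, then falls back to SIGKILL.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        _ = cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace expired: SIGKILL
            <-done
            fmt.Println("killed after grace period")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "30") // stand-in for a container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        killWithGrace(cmd, 2*time.Second) // like gracePeriod=2 for openstackclient
    }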
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512492 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-internal-tls-certs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512533 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512571 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8xp\" (UniqueName: \"kubernetes.io/projected/2c7241f3-92bb-4295-97d9-4284784b11f3-kube-api-access-hx8xp\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.512602 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-public-tls-certs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.517308 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7241f3-92bb-4295-97d9-4284784b11f3-logs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.534669 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement06fa-account-delete-fb874"]
Dec 05 01:35:34 crc kubenswrapper[4990]: E1205 01:35:34.535210 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19ad196-b05b-4ade-ba2b-3b532d447f8e" containerName="openstackclient"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.535233 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19ad196-b05b-4ade-ba2b-3b532d447f8e" containerName="openstackclient"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.535467 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19ad196-b05b-4ade-ba2b-3b532d447f8e" containerName="openstackclient"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.536307 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.554186 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-2j9fb"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.574036 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement06fa-account-delete-fb874"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.588757 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7599ccc789-q6ldt"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.595347 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nbpzw"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.615465 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmp4c\" (UniqueName: \"kubernetes.io/projected/ac1cabc4-d51d-43b6-8903-f098d13c1952-kube-api-access-gmp4c\") pod \"placement06fa-account-delete-fb874\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.615557 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts\") pod \"placement06fa-account-delete-fb874\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.619146 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8xp\" (UniqueName: \"kubernetes.io/projected/2c7241f3-92bb-4295-97d9-4284784b11f3-kube-api-access-hx8xp\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.629539 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cindera4a4-account-delete-kpsxn"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.631132 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.635093 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cindera4a4-account-delete-kpsxn"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.647933 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-public-tls-certs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.648245 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutronc36c-account-delete-nt5gn"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.653301 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.655307 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-combined-ca-bundle\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.657269 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.663799 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.684432 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.684684 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="ovn-northd" containerID="cri-o://77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" gracePeriod=30
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.684806 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="openstack-network-exporter" containerID="cri-o://0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8" gracePeriod=30
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.689024 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-internal-tls-certs\") pod \"barbican-api-78f948dd74-zmh7q\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.716382 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronc36c-account-delete-nt5gn"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.717356 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-operator-scripts\") pod \"neutronc36c-account-delete-nt5gn\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.717426 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts\") pod \"cindera4a4-account-delete-kpsxn\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.717458 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmp4c\" (UniqueName: \"kubernetes.io/projected/ac1cabc4-d51d-43b6-8903-f098d13c1952-kube-api-access-gmp4c\") pod \"placement06fa-account-delete-fb874\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.717503 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts\") pod \"placement06fa-account-delete-fb874\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.717556 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdvn\" (UniqueName: \"kubernetes.io/projected/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-kube-api-access-cgdvn\") pod \"neutronc36c-account-delete-nt5gn\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.717617 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvrq\" (UniqueName: \"kubernetes.io/projected/ab630416-46f3-495f-92c2-732abce81632-kube-api-access-dsvrq\") pod \"cindera4a4-account-delete-kpsxn\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.722289 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts\") pod \"placement06fa-account-delete-fb874\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.776353 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.818937 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgdvn\" (UniqueName: \"kubernetes.io/projected/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-kube-api-access-cgdvn\") pod \"neutronc36c-account-delete-nt5gn\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.819017 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvrq\" (UniqueName: \"kubernetes.io/projected/ab630416-46f3-495f-92c2-732abce81632-kube-api-access-dsvrq\") pod \"cindera4a4-account-delete-kpsxn\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.819054 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-operator-scripts\") pod \"neutronc36c-account-delete-nt5gn\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.819101 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts\") pod \"cindera4a4-account-delete-kpsxn\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.819839 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts\") pod \"cindera4a4-account-delete-kpsxn\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.820651 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-operator-scripts\") pod \"neutronc36c-account-delete-nt5gn\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.829827 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2qddb"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.872165 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2qddb"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.893238 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-n9mqs"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.893959 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="galera" probeResult="failure" output="command timed out"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.904068 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvrq\" (UniqueName: \"kubernetes.io/projected/ab630416-46f3-495f-92c2-732abce81632-kube-api-access-dsvrq\") pod \"cindera4a4-account-delete-kpsxn\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " pod="openstack/cindera4a4-account-delete-kpsxn"
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.912414 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n9mqs"]
Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.915239 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmp4c\" (UniqueName: \"kubernetes.io/projected/ac1cabc4-d51d-43b6-8903-f098d13c1952-kube-api-access-gmp4c\") pod \"placement06fa-account-delete-fb874\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " pod="openstack/placement06fa-account-delete-fb874"
Dec 05 01:35:34 crc kubenswrapper[4990]: E1205 01:35:34.920436 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 05 01:35:34 crc kubenswrapper[4990]: E1205 01:35:34.920509 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data podName:ed473a7a-f068-49a3-ae4c-b57b39e33b28 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:35.920478709 +0000 UTC m=+1634.296694070 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data") pod "rabbitmq-cell1-server-0" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28") : configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.926292 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgdvn\" (UniqueName: \"kubernetes.io/projected/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-kube-api-access-cgdvn\") pod \"neutronc36c-account-delete-nt5gn\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " pod="openstack/neutronc36c-account-delete-nt5gn" Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.930253 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.930667 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="openstack-network-exporter" containerID="cri-o://bad858e159bf07ff0d3caac7ac673c248bb0725f3fd9fe9254369591a53861cf" gracePeriod=300 Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.957352 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fr28q"] Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.970263 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fr28q"] Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.978526 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 01:35:34 crc kubenswrapper[4990]: I1205 01:35:34.978880 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="openstack-network-exporter" containerID="cri-o://31641ed3f43c47e4a2506c3dcbee8f9106ba3dd06dc5c1aae7406895b483339c" gracePeriod=300 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.018180 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-56hk7"] Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.039961 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.040026 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data podName:809c1920-3205-411c-a8c1-ed027b7e3b1f nodeName:}" failed. No retries permitted until 2025-12-05 01:35:35.540010552 +0000 UTC m=+1633.916225913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data") pod "rabbitmq-server-0" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f") : configmap "rabbitmq-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.044561 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-56hk7"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.070555 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glancebd6b-account-delete-9bmrw"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.072304 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.189414 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glancebd6b-account-delete-9bmrw"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.239197 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/barbican-api-78f948dd74-zmh7q" secret="" err="secret \"barbican-barbican-dockercfg-tvmcj\" not found" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.239298 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f948dd74-zmh7q" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.246287 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement06fa-account-delete-fb874" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.255216 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-w4dbr"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.256042 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerName="dnsmasq-dns" containerID="cri-o://8d671cb47e36755ea226221a2086110a02ed86ebb73f448d8147ccab642c1517" gracePeriod=10 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.278649 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="ovsdbserver-nb" containerID="cri-o://d669b98ffce9d4d7e245b409d79ed12266001924abaaf1749664c34bb9dcf1d8" gracePeriod=300 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.299842 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbicandd52-account-delete-2dsms"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.306665 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.309772 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cindera4a4-account-delete-kpsxn" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.327213 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="ovsdbserver-nb" probeResult="failure" output="" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.333560 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicandd52-account-delete-2dsms"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.343702 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell05b09-account-delete-wcpvd"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.345930 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.353930 4990 generic.go:334] "Generic (PLEG): container finished" podID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerID="0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8" exitCode=2 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.354268 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454","Type":"ContainerDied","Data":"0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8"} Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.356349 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts\") pod \"glancebd6b-account-delete-9bmrw\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.356506 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts\") pod \"barbicandd52-account-delete-2dsms\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.356541 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpkv\" (UniqueName: \"kubernetes.io/projected/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-kube-api-access-7dpkv\") pod \"barbicandd52-account-delete-2dsms\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.356561 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcss5\" (UniqueName: \"kubernetes.io/projected/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-kube-api-access-fcss5\") pod \"glancebd6b-account-delete-9bmrw\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.356965 4990 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.357011 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:35.856995282 +0000 UTC m=+1634.233210643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.357353 4990 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.357396 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:35.857381353 +0000 UTC m=+1634.233596714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-api-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.359606 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gch4g_92e80556-5f2d-44ed-b165-3211fd50ad98/openstack-network-exporter/0.log" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.359655 4990 generic.go:334] "Generic (PLEG): container finished" podID="92e80556-5f2d-44ed-b165-3211fd50ad98" containerID="c2e8be780f912a39bffb8d62d08b624b97b1904a55d0be61a24accc3f6874ef6" exitCode=2 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.359735 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gch4g" event={"ID":"92e80556-5f2d-44ed-b165-3211fd50ad98","Type":"ContainerDied","Data":"c2e8be780f912a39bffb8d62d08b624b97b1904a55d0be61a24accc3f6874ef6"} Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.371909 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="ovsdbserver-sb" containerID="cri-o://eba3c2e58a11f77331043a8b651d18ec1dbce27ca05b7e23324f354e0f09b319" gracePeriod=300 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.375872 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutronc36c-account-delete-nt5gn" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.381229 4990 generic.go:334] "Generic (PLEG): container finished" podID="10384219-030b-491b-884f-fd761eba4496" containerID="75a01a2a625f0f4818f4355774ffa785f189c55650887e146da7dd75eb006af5" exitCode=143 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.381257 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10384219-030b-491b-884f-fd761eba4496","Type":"ContainerDied","Data":"75a01a2a625f0f4818f4355774ffa785f189c55650887e146da7dd75eb006af5"} Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.391794 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05b09-account-delete-wcpvd"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.413940 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kzf4n"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.433094 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kzf4n"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.458514 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcpvz\" (UniqueName: \"kubernetes.io/projected/c7bf2416-2722-4ab6-a022-32116155fa68-kube-api-access-gcpvz\") pod \"novacell05b09-account-delete-wcpvd\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.458637 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts\") pod \"novacell05b09-account-delete-wcpvd\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.458708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts\") pod \"glancebd6b-account-delete-9bmrw\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.458752 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts\") pod \"barbicandd52-account-delete-2dsms\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.458790 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpkv\" (UniqueName: \"kubernetes.io/projected/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-kube-api-access-7dpkv\") pod \"barbicandd52-account-delete-2dsms\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.458830 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcss5\" (UniqueName: \"kubernetes.io/projected/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-kube-api-access-fcss5\") pod \"glancebd6b-account-delete-9bmrw\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " 
pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.460349 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts\") pod \"glancebd6b-account-delete-9bmrw\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.460960 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts\") pod \"barbicandd52-account-delete-2dsms\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.502316 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-snp9x"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.502979 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpkv\" (UniqueName: \"kubernetes.io/projected/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-kube-api-access-7dpkv\") pod \"barbicandd52-account-delete-2dsms\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.503166 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcss5\" (UniqueName: \"kubernetes.io/projected/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-kube-api-access-fcss5\") pod \"glancebd6b-account-delete-9bmrw\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.561389 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcpvz\" (UniqueName: \"kubernetes.io/projected/c7bf2416-2722-4ab6-a022-32116155fa68-kube-api-access-gcpvz\") pod \"novacell05b09-account-delete-wcpvd\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.561533 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts\") pod \"novacell05b09-account-delete-wcpvd\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.561646 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.561701 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data podName:809c1920-3205-411c-a8c1-ed027b7e3b1f nodeName:}" failed. No retries permitted until 2025-12-05 01:35:36.561683624 +0000 UTC m=+1634.937898985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data") pod "rabbitmq-server-0" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f") : configmap "rabbitmq-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.562466 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts\") pod \"novacell05b09-account-delete-wcpvd\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.581731 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-snp9x"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.591720 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcpvz\" (UniqueName: \"kubernetes.io/projected/c7bf2416-2722-4ab6-a022-32116155fa68-kube-api-access-gcpvz\") pod \"novacell05b09-account-delete-wcpvd\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.591789 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c5f858c6d-zxwsh"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.597030 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c5f858c6d-zxwsh" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-log" containerID="cri-o://60103cd0845a8ee4ac3a9f4a4b4913b499c5becaac1a1bf97f551b44867160a5" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.597560 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c5f858c6d-zxwsh" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-api" containerID="cri-o://5d677ac4cf7f17763cb57fdbd241dbbc43ee718b5236104aeaebafadb2a7637a" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.628587 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapiea53-account-delete-m8d6d"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.629831 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.639961 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapiea53-account-delete-m8d6d"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.656834 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbpct"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.668173 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbpct"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.683083 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqbr8"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.689372 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.704812 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqbr8"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.707769 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.726980 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fdcd7bc79-skn69"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.744130 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fdcd7bc79-skn69" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-api" containerID="cri-o://eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.744258 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fdcd7bc79-skn69" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-httpd" containerID="cri-o://ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.749552 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.766225 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d75t\" (UniqueName: \"kubernetes.io/projected/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-kube-api-access-7d75t\") pod \"novaapiea53-account-delete-m8d6d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.766368 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts\") pod \"novaapiea53-account-delete-m8d6d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.770917 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.771493 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-server" containerID="cri-o://8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.772502 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="swift-recon-cron" containerID="cri-o://af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.772652 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="rsync" containerID="cri-o://4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194" gracePeriod=30 Dec 05 01:35:35 crc 
kubenswrapper[4990]: I1205 01:35:35.772772 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-expirer" containerID="cri-o://6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.772900 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-updater" containerID="cri-o://0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773023 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-auditor" containerID="cri-o://f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773138 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-replicator" containerID="cri-o://26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773278 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-server" containerID="cri-o://4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773422 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-updater" containerID="cri-o://78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773556 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-auditor" containerID="cri-o://c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773681 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-replicator" containerID="cri-o://9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773801 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-server" containerID="cri-o://d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.773931 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-reaper" containerID="cri-o://9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.774044 4990 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-auditor" containerID="cri-o://ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.774162 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-replicator" containerID="cri-o://d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.825094 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.843612 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.843898 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-log" containerID="cri-o://82eb58ebe7ffe6cca157c0c411fe2fb1cb6d998e427d96fff54c39ba5fae459b" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.844403 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-httpd" containerID="cri-o://5f490bc39eb1a824091b54123c3705eafc0ddd4d3bb92d574be2d6b179034a7a" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.868093 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d75t\" (UniqueName: \"kubernetes.io/projected/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-kube-api-access-7d75t\") pod \"novaapiea53-account-delete-m8d6d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.868195 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts\") pod \"novaapiea53-account-delete-m8d6d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.868939 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts\") pod \"novaapiea53-account-delete-m8d6d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.869159 4990 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.876739 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:36.876710759 +0000 UTC m=+1635.252926120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.878959 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.879209 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-log" containerID="cri-o://d20b6d3367c7e8bc8e6b2c77a261707a5e35a27706e2fb7941de8c18a86ffb76" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.871678 4990 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.879634 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-httpd" containerID="cri-o://fe46232d47a2817ca0f3b8f9049c81973cfa81de8d607b295f93e7368320d630" gracePeriod=30 Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.879684 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:36.879664993 +0000 UTC m=+1635.255880354 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-api-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.914255 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d75t\" (UniqueName: \"kubernetes.io/projected/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-kube-api-access-7d75t\") pod \"novaapiea53-account-delete-m8d6d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:35 crc kubenswrapper[4990]: I1205 01:35:35.960051 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="rabbitmq" containerID="cri-o://6a124f2ceb58f1b28fd7e33d50fc28756c66696a4774e8efa70e6a53e7a97329" gracePeriod=604800 Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.970091 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:35 crc kubenswrapper[4990]: E1205 01:35:35.976006 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data podName:ed473a7a-f068-49a3-ae4c-b57b39e33b28 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:37.975979818 +0000 UTC m=+1636.352195179 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data") pod "rabbitmq-cell1-server-0" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28") : configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.001174 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c2a959-3818-40b2-8182-7fa3287ee0df" path="/var/lib/kubelet/pods/04c2a959-3818-40b2-8182-7fa3287ee0df/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.009749 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11217722-8e69-4028-a7c3-036cfdefcb77" path="/var/lib/kubelet/pods/11217722-8e69-4028-a7c3-036cfdefcb77/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.012085 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf00e7d-d396-4719-b077-bd14781d8836" path="/var/lib/kubelet/pods/1cf00e7d-d396-4719-b077-bd14781d8836/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.012968 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4839e707-1591-41e8-8bc3-23024188eb47" path="/var/lib/kubelet/pods/4839e707-1591-41e8-8bc3-23024188eb47/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.013602 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d6536f-e0ed-42d2-9676-40bc88de1473" path="/var/lib/kubelet/pods/59d6536f-e0ed-42d2-9676-40bc88de1473/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.015619 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfef6189-60ca-4088-97fa-6dc3fb1e1a52" path="/var/lib/kubelet/pods/bfef6189-60ca-4088-97fa-6dc3fb1e1a52/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.018300 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5be3dfc-61e4-495c-8b0b-22f417664a9c" path="/var/lib/kubelet/pods/d5be3dfc-61e4-495c-8b0b-22f417664a9c/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.019267 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5e4277-78a0-4eca-b9f6-67fc6c925ed1" path="/var/lib/kubelet/pods/da5e4277-78a0-4eca-b9f6-67fc6c925ed1/volumes" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.023085 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.023424 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-log" containerID="cri-o://48ca7bcf7c508929d069e5d6224db21799fee57e82ffabebd9ac3f8157e41ad0" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.025122 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-metadata" containerID="cri-o://ddbc25bf62fa17b335529abe8efcf931bb13fc14cc17b4da59cbfeb8d6d41a3a" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.074640 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.077242 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-log" 
containerID="cri-o://a6c8921d8a3c62aed69725cab2690e2cd5481b1e2d2f6d5631d6f5f5be266d43" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.077619 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-api" containerID="cri-o://95871264dddedee0223bf43470e710503cc4d9eb3d22d3ebef3d08484f77a4e6" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.103993 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.105599 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.112301 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gch4g_92e80556-5f2d-44ed-b165-3211fd50ad98/openstack-network-exporter/0.log" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.112356 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.141978 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e6ca-account-create-update-wzznq"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196038 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovn-rundir\") pod \"92e80556-5f2d-44ed-b165-3211fd50ad98\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196108 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e80556-5f2d-44ed-b165-3211fd50ad98-config\") pod \"92e80556-5f2d-44ed-b165-3211fd50ad98\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196231 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovs-rundir\") pod \"92e80556-5f2d-44ed-b165-3211fd50ad98\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196262 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdggz\" (UniqueName: \"kubernetes.io/projected/92e80556-5f2d-44ed-b165-3211fd50ad98-kube-api-access-gdggz\") pod \"92e80556-5f2d-44ed-b165-3211fd50ad98\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196324 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-combined-ca-bundle\") pod \"92e80556-5f2d-44ed-b165-3211fd50ad98\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196369 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-metrics-certs-tls-certs\") pod \"92e80556-5f2d-44ed-b165-3211fd50ad98\" (UID: \"92e80556-5f2d-44ed-b165-3211fd50ad98\") " Dec 05 01:35:36 crc 
kubenswrapper[4990]: I1205 01:35:36.196614 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "92e80556-5f2d-44ed-b165-3211fd50ad98" (UID: "92e80556-5f2d-44ed-b165-3211fd50ad98"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.196680 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "92e80556-5f2d-44ed-b165-3211fd50ad98" (UID: "92e80556-5f2d-44ed-b165-3211fd50ad98"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.197176 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.197195 4990 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/92e80556-5f2d-44ed-b165-3211fd50ad98-ovs-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.197383 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e80556-5f2d-44ed-b165-3211fd50ad98-config" (OuterVolumeSpecName: "config") pod "92e80556-5f2d-44ed-b165-3211fd50ad98" (UID: "92e80556-5f2d-44ed-b165-3211fd50ad98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.211552 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9fvnr"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.266250 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e80556-5f2d-44ed-b165-3211fd50ad98-kube-api-access-gdggz" (OuterVolumeSpecName: "kube-api-access-gdggz") pod "92e80556-5f2d-44ed-b165-3211fd50ad98" (UID: "92e80556-5f2d-44ed-b165-3211fd50ad98"). InnerVolumeSpecName "kube-api-access-gdggz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.271366 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92e80556-5f2d-44ed-b165-3211fd50ad98" (UID: "92e80556-5f2d-44ed-b165-3211fd50ad98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.285246 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9fvnr"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.304175 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e80556-5f2d-44ed-b165-3211fd50ad98-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.304208 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdggz\" (UniqueName: \"kubernetes.io/projected/92e80556-5f2d-44ed-b165-3211fd50ad98-kube-api-access-gdggz\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.304220 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.318315 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57775f7b86-mwzx9"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.331306 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e6ca-account-create-update-wzznq"] Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.332711 4990 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 05 01:35:36 crc kubenswrapper[4990]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 05 01:35:36 crc kubenswrapper[4990]: + source /usr/local/bin/container-scripts/functions Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNBridge=br-int Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNRemote=tcp:localhost:6642 Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNEncapType=geneve Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNAvailabilityZones= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ EnableChassisAsGateway=true Dec 05 01:35:36 crc kubenswrapper[4990]: ++ PhysicalNetworks= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNHostName= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 05 01:35:36 crc kubenswrapper[4990]: ++ ovs_dir=/var/lib/openvswitch Dec 05 01:35:36 crc kubenswrapper[4990]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 05 01:35:36 crc kubenswrapper[4990]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 05 01:35:36 crc kubenswrapper[4990]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + sleep 0.5 Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + sleep 0.5 Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + cleanup_ovsdb_server_semaphore Dec 05 01:35:36 crc kubenswrapper[4990]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 01:35:36 crc kubenswrapper[4990]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 05 01:35:36 crc kubenswrapper[4990]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-2j9fb" message=< Dec 05 01:35:36 crc kubenswrapper[4990]: Exiting ovsdb-server (5) [ OK ] Dec 05 01:35:36 crc kubenswrapper[4990]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 05 01:35:36 crc kubenswrapper[4990]: + source /usr/local/bin/container-scripts/functions Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNBridge=br-int Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNRemote=tcp:localhost:6642 Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNEncapType=geneve Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNAvailabilityZones= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ EnableChassisAsGateway=true Dec 05 01:35:36 crc kubenswrapper[4990]: ++ PhysicalNetworks= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNHostName= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 05 01:35:36 crc kubenswrapper[4990]: ++ ovs_dir=/var/lib/openvswitch Dec 05 01:35:36 crc kubenswrapper[4990]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 05 01:35:36 crc kubenswrapper[4990]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 05 01:35:36 crc kubenswrapper[4990]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + sleep 0.5 Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + sleep 0.5 Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + cleanup_ovsdb_server_semaphore Dec 05 01:35:36 crc kubenswrapper[4990]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 01:35:36 crc kubenswrapper[4990]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 05 01:35:36 crc kubenswrapper[4990]: > Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.332760 4990 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 05 01:35:36 crc kubenswrapper[4990]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 05 01:35:36 crc kubenswrapper[4990]: + source /usr/local/bin/container-scripts/functions Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNBridge=br-int Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNRemote=tcp:localhost:6642 Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNEncapType=geneve Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNAvailabilityZones= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ EnableChassisAsGateway=true Dec 05 01:35:36 crc kubenswrapper[4990]: ++ PhysicalNetworks= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ OVNHostName= Dec 05 01:35:36 crc kubenswrapper[4990]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 05 01:35:36 crc kubenswrapper[4990]: ++ ovs_dir=/var/lib/openvswitch Dec 05 01:35:36 crc kubenswrapper[4990]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 05 01:35:36 crc kubenswrapper[4990]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 05 01:35:36 crc kubenswrapper[4990]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + sleep 0.5 Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + sleep 0.5 Dec 05 01:35:36 crc kubenswrapper[4990]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 01:35:36 crc kubenswrapper[4990]: + cleanup_ovsdb_server_semaphore Dec 05 01:35:36 crc kubenswrapper[4990]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 01:35:36 crc kubenswrapper[4990]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 05 01:35:36 crc kubenswrapper[4990]: > pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" containerID="cri-o://edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.332793 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" containerID="cri-o://edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" gracePeriod=29 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.340836 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.341067 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener-log" containerID="cri-o://f737002671cee197ff06fe034d7cff773e243d6fb39490186391618718dde0ca" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.341217 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener" containerID="cri-o://861c7f58bd75979689aba5728fe607c3673e66e27dce3331238271fb617060bd" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.354452 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7599ccc789-q6ldt"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.364827 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-557cdcfdf5-b7n8x"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.365119 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker-log" containerID="cri-o://fb190cc3a575d8d3e5ae585358a9c8f90d298cdbc882ba5af1329dfb10d6b5c0" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.365287 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker" containerID="cri-o://acbdcf83ea752767a3b017fe9de23c7e771233760249604ee1ef047acd8ec3f1" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.371650 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74bb84bc86-8krlf"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.371932 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74bb84bc86-8krlf" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api-log" containerID="cri-o://57746098ad306cb3039f8ee75ec4203722dcaf04ca8ec10ec4bae99de921040a" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 
01:35:36.372380 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74bb84bc86-8krlf" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api" containerID="cri-o://6ddb956e1bc6923210a8635165e707d600786748db6d8f83199e419eecac98d4" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.387611 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f948dd74-zmh7q"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.395551 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.395805 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4182c8b1-5c4d-4f6b-aeca-9492abf6069e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.416707 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.427270 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.427496 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" containerName="nova-scheduler-scheduler" containerID="cri-o://8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.452867 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gmxzd"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.465092 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" containerID="cri-o://88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" gracePeriod=29 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.467967 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.468213 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" containerName="nova-cell0-conductor-conductor" containerID="cri-o://4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.474542 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "92e80556-5f2d-44ed-b165-3211fd50ad98" (UID: "92e80556-5f2d-44ed-b165-3211fd50ad98"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.481149 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gmxzd"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.484226 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="rabbitmq" containerID="cri-o://39a3ea367ecbac2fdb7b56ed37380e3e71e8af696eebed8fe12028523b333328" gracePeriod=604800 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.492663 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.492910 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" containerName="nova-cell1-conductor-conductor" containerID="cri-o://99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.503965 4990 generic.go:334] "Generic (PLEG): container finished" podID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerID="48ca7bcf7c508929d069e5d6224db21799fee57e82ffabebd9ac3f8157e41ad0" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.504039 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba3c2a5d-0bec-4905-8cba-d0e565643fe7","Type":"ContainerDied","Data":"48ca7bcf7c508929d069e5d6224db21799fee57e82ffabebd9ac3f8157e41ad0"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.513385 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e80556-5f2d-44ed-b165-3211fd50ad98-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.514228 4990 generic.go:334] "Generic (PLEG): container finished" podID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerID="8d671cb47e36755ea226221a2086110a02ed86ebb73f448d8147ccab642c1517" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.514259 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" event={"ID":"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e","Type":"ContainerDied","Data":"8d671cb47e36755ea226221a2086110a02ed86ebb73f448d8147ccab642c1517"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.521051 4990 generic.go:334] "Generic (PLEG): container finished" podID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerID="d20b6d3367c7e8bc8e6b2c77a261707a5e35a27706e2fb7941de8c18a86ffb76" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.521118 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d","Type":"ContainerDied","Data":"d20b6d3367c7e8bc8e6b2c77a261707a5e35a27706e2fb7941de8c18a86ffb76"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.531560 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnw59"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.545305 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnw59"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.549770 4990 generic.go:334] "Generic (PLEG): container finished" 
podID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerID="ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.549833 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcd7bc79-skn69" event={"ID":"60d8e2e9-244e-48b4-b99f-2606dc492482","Type":"ContainerDied","Data":"ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.588034 4990 generic.go:334] "Generic (PLEG): container finished" podID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.588096 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerDied","Data":"edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.606220 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.617190 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.617247 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data podName:809c1920-3205-411c-a8c1-ed027b7e3b1f nodeName:}" failed. No retries permitted until 2025-12-05 01:35:38.617231364 +0000 UTC m=+1636.993446725 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data") pod "rabbitmq-server-0" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f") : configmap "rabbitmq-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666790 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666833 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666843 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666849 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666856 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666862 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd" exitCode=0 
Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666871 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666877 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666883 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666889 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666895 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666900 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666906 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666912 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.666978 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667003 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667014 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667023 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667031 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337"} Dec 05 
01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667039 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667047 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667055 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667064 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667072 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667080 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667090 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667098 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.667106 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.702990 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gch4g_92e80556-5f2d-44ed-b165-3211fd50ad98/openstack-network-exporter/0.log" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.703159 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gch4g" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.703614 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gch4g" event={"ID":"92e80556-5f2d-44ed-b165-3211fd50ad98","Type":"ContainerDied","Data":"a7a11082a66fecab87b3288c6d11916a97e6105c2ce7e93782df7d9596c3b900"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.703673 4990 scope.go:117] "RemoveContainer" containerID="c2e8be780f912a39bffb8d62d08b624b97b1904a55d0be61a24accc3f6874ef6" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.719468 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-nb\") pod \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.719565 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-config\") pod \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.719665 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-swift-storage-0\") pod \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.719754 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-sb\") pod \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.719818 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-svc\") pod \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.719841 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6ngd\" (UniqueName: \"kubernetes.io/projected/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-kube-api-access-b6ngd\") pod \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\" (UID: \"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.722828 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4475723-8c01-483c-991d-d686c6361021/ovsdbserver-sb/0.log" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.722861 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4475723-8c01-483c-991d-d686c6361021" containerID="bad858e159bf07ff0d3caac7ac673c248bb0725f3fd9fe9254369591a53861cf" exitCode=2 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.722877 4990 generic.go:334] "Generic (PLEG): container finished" podID="f4475723-8c01-483c-991d-d686c6361021" containerID="eba3c2e58a11f77331043a8b651d18ec1dbce27ca05b7e23324f354e0f09b319" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.723074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"f4475723-8c01-483c-991d-d686c6361021","Type":"ContainerDied","Data":"bad858e159bf07ff0d3caac7ac673c248bb0725f3fd9fe9254369591a53861cf"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.723100 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4475723-8c01-483c-991d-d686c6361021","Type":"ContainerDied","Data":"eba3c2e58a11f77331043a8b651d18ec1dbce27ca05b7e23324f354e0f09b319"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.742099 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-kube-api-access-b6ngd" (OuterVolumeSpecName: "kube-api-access-b6ngd") pod "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" (UID: "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e"). InnerVolumeSpecName "kube-api-access-b6ngd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.749918 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gch4g"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.757254 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-gch4g"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.781032 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c137d1b-6433-40ac-8036-84313eef1967/ovsdbserver-nb/0.log" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.781100 4990 generic.go:334] "Generic (PLEG): container finished" podID="7c137d1b-6433-40ac-8036-84313eef1967" containerID="31641ed3f43c47e4a2506c3dcbee8f9106ba3dd06dc5c1aae7406895b483339c" exitCode=2 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.781121 4990 generic.go:334] "Generic (PLEG): container finished" podID="7c137d1b-6433-40ac-8036-84313eef1967" containerID="d669b98ffce9d4d7e245b409d79ed12266001924abaaf1749664c34bb9dcf1d8" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.781206 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c137d1b-6433-40ac-8036-84313eef1967","Type":"ContainerDied","Data":"31641ed3f43c47e4a2506c3dcbee8f9106ba3dd06dc5c1aae7406895b483339c"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.781255 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c137d1b-6433-40ac-8036-84313eef1967","Type":"ContainerDied","Data":"d669b98ffce9d4d7e245b409d79ed12266001924abaaf1749664c34bb9dcf1d8"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.786708 4990 generic.go:334] "Generic (PLEG): container finished" podID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerID="60103cd0845a8ee4ac3a9f4a4b4913b499c5becaac1a1bf97f551b44867160a5" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.786756 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5f858c6d-zxwsh" event={"ID":"82eb03c9-869c-447d-9b78-b4ef916b59ac","Type":"ContainerDied","Data":"60103cd0845a8ee4ac3a9f4a4b4913b499c5becaac1a1bf97f551b44867160a5"} Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.788830 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" 
cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.805369 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.821537 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6ngd\" (UniqueName: \"kubernetes.io/projected/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-kube-api-access-b6ngd\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.843080 4990 generic.go:334] "Generic (PLEG): container finished" podID="9ca5e656-876c-4e87-b049-5c284b211804" containerID="7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716" exitCode=0 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.843149 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ca5e656-876c-4e87-b049-5c284b211804","Type":"ContainerDied","Data":"7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716"} Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.845001 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.845145 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="ovn-northd" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.845594 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4475723-8c01-483c-991d-d686c6361021/ovsdbserver-sb/0.log" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.845683 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.871614 4990 generic.go:334] "Generic (PLEG): container finished" podID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerID="82eb58ebe7ffe6cca157c0c411fe2fb1cb6d998e427d96fff54c39ba5fae459b" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.871905 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3","Type":"ContainerDied","Data":"82eb58ebe7ffe6cca157c0c411fe2fb1cb6d998e427d96fff54c39ba5fae459b"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.874074 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" (UID: "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.886134 4990 generic.go:334] "Generic (PLEG): container finished" podID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerID="a6c8921d8a3c62aed69725cab2690e2cd5481b1e2d2f6d5631d6f5f5be266d43" exitCode=143 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.886178 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b5ac2be-fc48-4bde-a668-b3549462a101","Type":"ContainerDied","Data":"a6c8921d8a3c62aed69725cab2690e2cd5481b1e2d2f6d5631d6f5f5be266d43"} Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.894100 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" (UID: "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.906789 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="galera" containerID="cri-o://d296b3f03577d11daa223c38effcb4c833eb5944cd5256ee247941d30a2772a5" gracePeriod=30 Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.924790 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.924850 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhw4z\" (UniqueName: \"kubernetes.io/projected/f4475723-8c01-483c-991d-d686c6361021-kube-api-access-mhw4z\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.924924 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-metrics-certs-tls-certs\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.924963 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4475723-8c01-483c-991d-d686c6361021-ovsdb-rundir\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.925015 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-scripts\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.925080 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-combined-ca-bundle\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc 
kubenswrapper[4990]: I1205 01:35:36.925150 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-config\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.925181 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-ovsdbserver-sb-tls-certs\") pod \"f4475723-8c01-483c-991d-d686c6361021\" (UID: \"f4475723-8c01-483c-991d-d686c6361021\") " Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.927105 4990 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.927198 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.927367 4990 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.927590 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:38.927569126 +0000 UTC m=+1637.303784487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-api-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.928187 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-scripts" (OuterVolumeSpecName: "scripts") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.929037 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-config" (OuterVolumeSpecName: "config") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.929112 4990 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.929167 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:38.92914839 +0000 UTC m=+1637.305363751 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-config-data" not found Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.929348 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4475723-8c01-483c-991d-d686c6361021-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.932512 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4475723-8c01-483c-991d-d686c6361021-kube-api-access-mhw4z" (OuterVolumeSpecName: "kube-api-access-mhw4z") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "kube-api-access-mhw4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.938625 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.948998 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" (UID: "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.965111 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-config" (OuterVolumeSpecName: "config") pod "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" (UID: "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: I1205 01:35:36.976022 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" (UID: "a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.993419 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:36 crc kubenswrapper[4990]: E1205 01:35:36.996424 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:37 crc kubenswrapper[4990]: E1205 01:35:37.000137 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:37 crc kubenswrapper[4990]: E1205 01:35:37.000379 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" containerName="nova-cell0-conductor-conductor" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.000339 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.030922 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031128 4990 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031205 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031394 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031475 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031587 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031662 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhw4z\" (UniqueName: \"kubernetes.io/projected/f4475723-8c01-483c-991d-d686c6361021-kube-api-access-mhw4z\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031736 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4475723-8c01-483c-991d-d686c6361021-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.031811 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4475723-8c01-483c-991d-d686c6361021-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.066862 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.066931 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f4475723-8c01-483c-991d-d686c6361021" (UID: "f4475723-8c01-483c-991d-d686c6361021"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.079170 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.133749 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.133799 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.133811 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4475723-8c01-483c-991d-d686c6361021-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.165510 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c137d1b-6433-40ac-8036-84313eef1967/ovsdbserver-nb/0.log" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.165848 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.234723 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-ovsdbserver-nb-tls-certs\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.234760 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-config\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.234837 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7v7\" (UniqueName: \"kubernetes.io/projected/7c137d1b-6433-40ac-8036-84313eef1967-kube-api-access-wf7v7\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.234884 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c137d1b-6433-40ac-8036-84313eef1967-ovsdb-rundir\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.234952 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.234973 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-scripts\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: 
\"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.235013 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-metrics-certs-tls-certs\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.235071 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-combined-ca-bundle\") pod \"7c137d1b-6433-40ac-8036-84313eef1967\" (UID: \"7c137d1b-6433-40ac-8036-84313eef1967\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.236899 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c137d1b-6433-40ac-8036-84313eef1967-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.237287 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-scripts" (OuterVolumeSpecName: "scripts") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.238808 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-config" (OuterVolumeSpecName: "config") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.241343 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f948dd74-zmh7q"] Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.246030 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.246095 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c137d1b-6433-40ac-8036-84313eef1967-kube-api-access-wf7v7" (OuterVolumeSpecName: "kube-api-access-wf7v7") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "kube-api-access-wf7v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: W1205 01:35:37.252646 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c7241f3_92bb_4295_97d9_4284784b11f3.slice/crio-257b886219f341cd87d2cad98bf72ca9cfa0560f415b9de2c61e1e4cd8739785 WatchSource:0}: Error finding container 257b886219f341cd87d2cad98bf72ca9cfa0560f415b9de2c61e1e4cd8739785: Status 404 returned error can't find the container with id 257b886219f341cd87d2cad98bf72ca9cfa0560f415b9de2c61e1e4cd8739785 Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.268727 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.339694 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7v7\" (UniqueName: \"kubernetes.io/projected/7c137d1b-6433-40ac-8036-84313eef1967-kube-api-access-wf7v7\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.339724 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c137d1b-6433-40ac-8036-84313eef1967-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.339752 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.339761 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.339772 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.339783 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c137d1b-6433-40ac-8036-84313eef1967-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.366613 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7599ccc789-q6ldt"] Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.373309 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.373828 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57775f7b86-mwzx9"] Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.375388 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: 
"7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: W1205 01:35:37.390615 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode94da38c_b2d3_4ddb_b032_a6e5bfa62145.slice/crio-1bfa86a2a543e49db9ce70b25b2c9ee336180c27c553d5a9fe316d9a59f914a0 WatchSource:0}: Error finding container 1bfa86a2a543e49db9ce70b25b2c9ee336180c27c553d5a9fe316d9a59f914a0: Status 404 returned error can't find the container with id 1bfa86a2a543e49db9ce70b25b2c9ee336180c27c553d5a9fe316d9a59f914a0 Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.401725 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7c137d1b-6433-40ac-8036-84313eef1967" (UID: "7c137d1b-6433-40ac-8036-84313eef1967"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.441852 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.441884 4990 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c137d1b-6433-40ac-8036-84313eef1967-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.441894 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.602730 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-84997d8dc-hzdlp"] Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.602936 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-84997d8dc-hzdlp" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-httpd" containerID="cri-o://7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578" gracePeriod=30 Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.603308 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-84997d8dc-hzdlp" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-server" containerID="cri-o://16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e" gracePeriod=30 Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.756228 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.786123 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.790108 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.851714 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmb48\" (UniqueName: \"kubernetes.io/projected/9ca5e656-876c-4e87-b049-5c284b211804-kube-api-access-hmb48\") pod \"9ca5e656-876c-4e87-b049-5c284b211804\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.851879 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config-secret\") pod \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.852843 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ca5e656-876c-4e87-b049-5c284b211804-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9ca5e656-876c-4e87-b049-5c284b211804" (UID: "9ca5e656-876c-4e87-b049-5c284b211804"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.857250 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca5e656-876c-4e87-b049-5c284b211804-etc-machine-id\") pod \"9ca5e656-876c-4e87-b049-5c284b211804\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.857635 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config\") pod \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.857670 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-combined-ca-bundle\") pod \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860591 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data-custom\") pod \"9ca5e656-876c-4e87-b049-5c284b211804\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860646 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdnw\" (UniqueName: \"kubernetes.io/projected/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-kube-api-access-5vdnw\") pod \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860680 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-nova-novncproxy-tls-certs\") pod \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860730 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-config-data\") pod \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860763 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-combined-ca-bundle\") pod \"9ca5e656-876c-4e87-b049-5c284b211804\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860789 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-scripts\") pod \"9ca5e656-876c-4e87-b049-5c284b211804\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.860813 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tschq\" (UniqueName: \"kubernetes.io/projected/f19ad196-b05b-4ade-ba2b-3b532d447f8e-kube-api-access-tschq\") pod \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\" (UID: \"f19ad196-b05b-4ade-ba2b-3b532d447f8e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.861362 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data\") pod \"9ca5e656-876c-4e87-b049-5c284b211804\" (UID: \"9ca5e656-876c-4e87-b049-5c284b211804\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.861544 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-vencrypt-tls-certs\") pod \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.861745 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-combined-ca-bundle\") pod \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\" (UID: \"4182c8b1-5c4d-4f6b-aeca-9492abf6069e\") " Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.862593 4990 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca5e656-876c-4e87-b049-5c284b211804-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.875291 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-kube-api-access-5vdnw" (OuterVolumeSpecName: "kube-api-access-5vdnw") pod "4182c8b1-5c4d-4f6b-aeca-9492abf6069e" (UID: "4182c8b1-5c4d-4f6b-aeca-9492abf6069e"). InnerVolumeSpecName "kube-api-access-5vdnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.875459 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-scripts" (OuterVolumeSpecName: "scripts") pod "9ca5e656-876c-4e87-b049-5c284b211804" (UID: "9ca5e656-876c-4e87-b049-5c284b211804"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.887090 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ca5e656-876c-4e87-b049-5c284b211804" (UID: "9ca5e656-876c-4e87-b049-5c284b211804"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.890668 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19ad196-b05b-4ade-ba2b-3b532d447f8e-kube-api-access-tschq" (OuterVolumeSpecName: "kube-api-access-tschq") pod "f19ad196-b05b-4ade-ba2b-3b532d447f8e" (UID: "f19ad196-b05b-4ade-ba2b-3b532d447f8e"). InnerVolumeSpecName "kube-api-access-tschq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.890810 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca5e656-876c-4e87-b049-5c284b211804-kube-api-access-hmb48" (OuterVolumeSpecName: "kube-api-access-hmb48") pod "9ca5e656-876c-4e87-b049-5c284b211804" (UID: "9ca5e656-876c-4e87-b049-5c284b211804"). InnerVolumeSpecName "kube-api-access-hmb48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.905427 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4475723-8c01-483c-991d-d686c6361021/ovsdbserver-sb/0.log" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.905664 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4475723-8c01-483c-991d-d686c6361021","Type":"ContainerDied","Data":"2fc8da6d992067e98eea1fc769162eea49a5a05ef7ee6c0e3a06815f331f4f96"} Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.905772 4990 scope.go:117] "RemoveContainer" containerID="bad858e159bf07ff0d3caac7ac673c248bb0725f3fd9fe9254369591a53861cf" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.905975 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.915879 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" event={"ID":"a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e","Type":"ContainerDied","Data":"31139e0b5d93e2ee80651bd099f04003f013897ab678065d078d7fa4dce9096c"} Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.916000 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-w4dbr" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.923734 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c137d1b-6433-40ac-8036-84313eef1967/ovsdbserver-nb/0.log" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.923969 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.923960 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c137d1b-6433-40ac-8036-84313eef1967","Type":"ContainerDied","Data":"11f37c0f8fa015d1e9a5a18a06886135da7bf7e528eaaa463e30429c118ec287"} Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.928024 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7599ccc789-q6ldt" event={"ID":"e94da38c-b2d3-4ddb-b032-a6e5bfa62145","Type":"ContainerStarted","Data":"1bfa86a2a543e49db9ce70b25b2c9ee336180c27c553d5a9fe316d9a59f914a0"} Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.928576 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" event={"ID":"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97","Type":"ContainerStarted","Data":"3fea20119ef314b71415f0e498137e3f7e618ac787a6fbcf0557233df608d6ed"} Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.928613 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" event={"ID":"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97","Type":"ContainerStarted","Data":"86aec915fd0bfaf2805f86bc873ff8e7f3dc44a12704927bf1f47c0f5a96003d"} Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.946471 4990 generic.go:334] "Generic (PLEG): container finished" podID="2c281c58-a95e-4669-bdfc-465759817928" containerID="d296b3f03577d11daa223c38effcb4c833eb5944cd5256ee247941d30a2772a5" exitCode=0 Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.946870 4990 scope.go:117] "RemoveContainer" containerID="eba3c2e58a11f77331043a8b651d18ec1dbce27ca05b7e23324f354e0f09b319" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.978711 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.978746 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tschq\" (UniqueName: \"kubernetes.io/projected/f19ad196-b05b-4ade-ba2b-3b532d447f8e-kube-api-access-tschq\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.978760 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmb48\" (UniqueName: \"kubernetes.io/projected/9ca5e656-876c-4e87-b049-5c284b211804-kube-api-access-hmb48\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.978774 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.978788 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vdnw\" (UniqueName: \"kubernetes.io/projected/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-kube-api-access-5vdnw\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.985298 4990 generic.go:334] "Generic (PLEG): container finished" podID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerID="57746098ad306cb3039f8ee75ec4203722dcaf04ca8ec10ec4bae99de921040a" exitCode=143 Dec 05 01:35:37 crc kubenswrapper[4990]: I1205 01:35:37.988443 4990 generic.go:334] "Generic (PLEG): container finished" 
podID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerID="f737002671cee197ff06fe034d7cff773e243d6fb39490186391618718dde0ca" exitCode=143 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.030172 4990 generic.go:334] "Generic (PLEG): container finished" podID="bb029546-9d20-445a-9926-2a43c235a755" containerID="7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578" exitCode=0 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.035076 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158b83e3-9326-447b-b100-3fc9f25383b2" path="/var/lib/kubelet/pods/158b83e3-9326-447b-b100-3fc9f25383b2/volumes" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.036372 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ff5181-d9ea-4726-97f0-3d62935f4949" path="/var/lib/kubelet/pods/74ff5181-d9ea-4726-97f0-3d62935f4949/volumes" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.041405 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92882bc9-a33b-4129-82a3-9ea0900acece" path="/var/lib/kubelet/pods/92882bc9-a33b-4129-82a3-9ea0900acece/volumes" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.042100 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e80556-5f2d-44ed-b165-3211fd50ad98" path="/var/lib/kubelet/pods/92e80556-5f2d-44ed-b165-3211fd50ad98/volumes" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.044802 4990 generic.go:334] "Generic (PLEG): container finished" podID="f19ad196-b05b-4ade-ba2b-3b532d447f8e" containerID="125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527" exitCode=137 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.044951 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.046970 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1f8c91-37fa-4816-9340-f6345f60a6cf" path="/var/lib/kubelet/pods/fc1f8c91-37fa-4816-9340-f6345f60a6cf/volumes" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.048216 4990 generic.go:334] "Generic (PLEG): container finished" podID="9ca5e656-876c-4e87-b049-5c284b211804" containerID="a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3" exitCode=0 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.048341 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.052402 4990 generic.go:334] "Generic (PLEG): container finished" podID="4182c8b1-5c4d-4f6b-aeca-9492abf6069e" containerID="36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96" exitCode=0 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.052513 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.075439 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronc36c-account-delete-nt5gn"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076625 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cindera4a4-account-delete-kpsxn"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076653 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c281c58-a95e-4669-bdfc-465759817928","Type":"ContainerDied","Data":"d296b3f03577d11daa223c38effcb4c833eb5944cd5256ee247941d30a2772a5"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076681 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076697 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bb84bc86-8krlf" event={"ID":"4489a490-bacc-498c-b0e3-d6b5cad13d34","Type":"ContainerDied","Data":"57746098ad306cb3039f8ee75ec4203722dcaf04ca8ec10ec4bae99de921040a"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076711 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" event={"ID":"fecef393-81c1-4d16-af9e-3d777782dd2f","Type":"ContainerDied","Data":"f737002671cee197ff06fe034d7cff773e243d6fb39490186391618718dde0ca"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076727 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076746 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement06fa-account-delete-fb874"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076759 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84997d8dc-hzdlp" event={"ID":"bb029546-9d20-445a-9926-2a43c235a755","Type":"ContainerDied","Data":"7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076778 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ca5e656-876c-4e87-b049-5c284b211804","Type":"ContainerDied","Data":"a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076790 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ca5e656-876c-4e87-b049-5c284b211804","Type":"ContainerDied","Data":"569b2fa4a9d7035a8cf8d7d0f2ceeef59a09e8c13c6717e7f2cff84863304275"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076802 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4182c8b1-5c4d-4f6b-aeca-9492abf6069e","Type":"ContainerDied","Data":"36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076814 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4182c8b1-5c4d-4f6b-aeca-9492abf6069e","Type":"ContainerDied","Data":"dbdf5d647f21e3b7db61caa3988c4057d728ad26a3e44ab1627d67a619ebeb7f"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076827 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f948dd74-zmh7q" 
event={"ID":"2c7241f3-92bb-4295-97d9-4284784b11f3","Type":"ContainerStarted","Data":"8b4a5c95a5bafa9fdced2d0ae48a57fdaa45a090e28899f2b4a2464fc1d5c263"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.076838 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f948dd74-zmh7q" event={"ID":"2c7241f3-92bb-4295-97d9-4284784b11f3","Type":"ContainerStarted","Data":"257b886219f341cd87d2cad98bf72ca9cfa0560f415b9de2c61e1e4cd8739785"} Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.080313 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.080369 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data podName:ed473a7a-f068-49a3-ae4c-b57b39e33b28 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.080352957 +0000 UTC m=+1640.456568318 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data") pod "rabbitmq-cell1-server-0" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28") : configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.082147 4990 generic.go:334] "Generic (PLEG): container finished" podID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerID="fb190cc3a575d8d3e5ae585358a9c8f90d298cdbc882ba5af1329dfb10d6b5c0" exitCode=143 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.082183 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" event={"ID":"ba876d22-269d-46e3-8a91-24c8646d1c75","Type":"ContainerDied","Data":"fb190cc3a575d8d3e5ae585358a9c8f90d298cdbc882ba5af1329dfb10d6b5c0"} Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.086663 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.103039 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.112303 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f19ad196-b05b-4ade-ba2b-3b532d447f8e" (UID: "f19ad196-b05b-4ade-ba2b-3b532d447f8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.112996 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-w4dbr"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.118332 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-w4dbr"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.141458 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f19ad196-b05b-4ade-ba2b-3b532d447f8e" (UID: "f19ad196-b05b-4ade-ba2b-3b532d447f8e"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.142304 4990 scope.go:117] "RemoveContainer" containerID="8d671cb47e36755ea226221a2086110a02ed86ebb73f448d8147ccab642c1517" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.163271 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "4182c8b1-5c4d-4f6b-aeca-9492abf6069e" (UID: "4182c8b1-5c4d-4f6b-aeca-9492abf6069e"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.179605 4990 scope.go:117] "RemoveContainer" containerID="bfcaa5ec5dc3e543e38527fe82f91c0c255a6b6ef7707eb92e012492bd0a3e24" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.180591 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ca5e656-876c-4e87-b049-5c284b211804" (UID: "9ca5e656-876c-4e87-b049-5c284b211804"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.183810 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f19ad196-b05b-4ade-ba2b-3b532d447f8e" (UID: "f19ad196-b05b-4ade-ba2b-3b532d447f8e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.186433 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4182c8b1-5c4d-4f6b-aeca-9492abf6069e" (UID: "4182c8b1-5c4d-4f6b-aeca-9492abf6069e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.190566 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.190585 4990 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.190594 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.190603 4990 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.190611 4990 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f19ad196-b05b-4ade-ba2b-3b532d447f8e-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.190619 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19ad196-b05b-4ade-ba2b-3b532d447f8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.193560 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "4182c8b1-5c4d-4f6b-aeca-9492abf6069e" (UID: "4182c8b1-5c4d-4f6b-aeca-9492abf6069e"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.262296 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-config-data" (OuterVolumeSpecName: "config-data") pod "4182c8b1-5c4d-4f6b-aeca-9492abf6069e" (UID: "4182c8b1-5c4d-4f6b-aeca-9492abf6069e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.292915 4990 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.292941 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4182c8b1-5c4d-4f6b-aeca-9492abf6069e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.293860 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapiea53-account-delete-m8d6d"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.332025 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data" (OuterVolumeSpecName: "config-data") pod "9ca5e656-876c-4e87-b049-5c284b211804" (UID: "9ca5e656-876c-4e87-b049-5c284b211804"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.385670 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glancebd6b-account-delete-9bmrw"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.396082 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca5e656-876c-4e87-b049-5c284b211804-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.419606 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05b09-account-delete-wcpvd"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.428048 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicandd52-account-delete-2dsms"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.457855 4990 scope.go:117] "RemoveContainer" containerID="31641ed3f43c47e4a2506c3dcbee8f9106ba3dd06dc5c1aae7406895b483339c" Dec 05 01:35:38 crc kubenswrapper[4990]: W1205 01:35:38.591102 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7bf2416_2722_4ab6_a022_32116155fa68.slice/crio-3cf8a592f6307771d8d2c338cb9026f7c6b841c65ec7edb110af8e86004c2353 WatchSource:0}: Error finding container 3cf8a592f6307771d8d2c338cb9026f7c6b841c65ec7edb110af8e86004c2353: Status 404 returned error can't find the container with id 3cf8a592f6307771d8d2c338cb9026f7c6b841c65ec7edb110af8e86004c2353 Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.645921 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.662633 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.673836 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.706931 4990 scope.go:117] "RemoveContainer" containerID="d669b98ffce9d4d7e245b409d79ed12266001924abaaf1749664c34bb9dcf1d8" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.711973 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zrnp\" (UniqueName: \"kubernetes.io/projected/2c281c58-a95e-4669-bdfc-465759817928-kube-api-access-7zrnp\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712026 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-operator-scripts\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712094 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c281c58-a95e-4669-bdfc-465759817928-config-data-generated\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712133 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-combined-ca-bundle\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712176 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-galera-tls-certs\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712235 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-kolla-config\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712280 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-config-data-default\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.712326 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"2c281c58-a95e-4669-bdfc-465759817928\" (UID: \"2c281c58-a95e-4669-bdfc-465759817928\") " Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.712763 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap 
"rabbitmq-config-data" not found Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.712817 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data podName:809c1920-3205-411c-a8c1-ed027b7e3b1f nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.712797335 +0000 UTC m=+1641.089012696 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data") pod "rabbitmq-server-0" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f") : configmap "rabbitmq-config-data" not found Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.719445 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.722388 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c281c58-a95e-4669-bdfc-465759817928-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.727729 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.727776 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.738558 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.738952 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c281c58-a95e-4669-bdfc-465759817928-kube-api-access-7zrnp" (OuterVolumeSpecName: "kube-api-access-7zrnp") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "kube-api-access-7zrnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.744065 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.761625 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.795179 4990 scope.go:117] "RemoveContainer" containerID="125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813510 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-etc-swift\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813549 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-public-tls-certs\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813580 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-run-httpd\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813627 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-log-httpd\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813699 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-combined-ca-bundle\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813758 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzzd\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-kube-api-access-kmzzd\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813807 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-config-data\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.813833 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-internal-tls-certs\") pod \"bb029546-9d20-445a-9926-2a43c235a755\" (UID: \"bb029546-9d20-445a-9926-2a43c235a755\") " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814208 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c281c58-a95e-4669-bdfc-465759817928-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814222 4990 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814231 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814239 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zrnp\" (UniqueName: \"kubernetes.io/projected/2c281c58-a95e-4669-bdfc-465759817928-kube-api-access-7zrnp\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814247 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c281c58-a95e-4669-bdfc-465759817928-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814511 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.814871 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.820568 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.832689 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-kube-api-access-kmzzd" (OuterVolumeSpecName: "kube-api-access-kmzzd") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "kube-api-access-kmzzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.833640 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.876907 4990 scope.go:117] "RemoveContainer" containerID="125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527" Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.877427 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527\": container with ID starting with 125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527 not found: ID does not exist" containerID="125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.877467 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527"} err="failed to get container status \"125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527\": rpc error: code = NotFound desc = could not find container \"125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527\": container with ID starting with 125c5c338aa8e9845fa2e8e4ed1dc5dbc6d155571a90f33a2bce9b3578bb0527 not found: ID does not exist" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.877822 4990 scope.go:117] "RemoveContainer" containerID="7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.882447 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.915735 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.915759 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb029546-9d20-445a-9926-2a43c235a755-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.915770 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.915779 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzzd\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-kube-api-access-kmzzd\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.915790 4990 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bb029546-9d20-445a-9926-2a43c235a755-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.915812 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.938087 4990 scope.go:117] "RemoveContainer" containerID="a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.976010 4990 scope.go:117] "RemoveContainer" containerID="7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716" Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.979813 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716\": container with ID starting with 7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716 not found: ID does not exist" containerID="7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.979846 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716"} err="failed to get container status \"7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716\": rpc error: code = NotFound desc = could not find container \"7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716\": container with ID starting with 7fa77ce9f772e359c9e72bc3678b54f2352122e3c3b7171d426cd33a61531716 not found: ID does not exist" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.979872 4990 scope.go:117] "RemoveContainer" containerID="a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3" Dec 05 01:35:38 crc kubenswrapper[4990]: E1205 01:35:38.981767 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3\": container with ID starting with 
a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3 not found: ID does not exist" containerID="a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.981797 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3"} err="failed to get container status \"a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3\": rpc error: code = NotFound desc = could not find container \"a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3\": container with ID starting with a95fb654a91d91d71d02b450e0f66b15586b768fefea64199189813f9b0c58c3 not found: ID does not exist" Dec 05 01:35:38 crc kubenswrapper[4990]: I1205 01:35:38.981810 4990 scope.go:117] "RemoveContainer" containerID="36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96" Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.019722 4990 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.019811 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:43.019785272 +0000 UTC m=+1641.396000633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-api-config-data" not found Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.021468 4990 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.021540 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data podName:2c7241f3-92bb-4295-97d9-4284784b11f3 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:43.021523261 +0000 UTC m=+1641.397738622 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data") pod "barbican-api-78f948dd74-zmh7q" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3") : secret "barbican-config-data" not found Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.045084 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.052083 4990 scope.go:117] "RemoveContainer" containerID="36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96" Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.052514 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96\": container with ID starting with 36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96 not found: ID does not exist" containerID="36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.052545 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96"} err="failed to get container status \"36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96\": rpc error: code = NotFound desc = could not find container \"36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96\": container with ID starting with 36b542039ff8a88aa766414141937ec39af2f0202d964f9ecf53fabdb9300f96 not found: ID does not exist" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.088790 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-central-agent" containerID="cri-o://31ce6ea891092f920fafd58685b5970d0c8960a1faf1c70db62854e2178e153a" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.090213 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="sg-core" containerID="cri-o://72b6662721b483f07b892ef1907c45dc83be4d09b4ac2d3ef321bc8da7ab9d10" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.090286 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="proxy-httpd" containerID="cri-o://60607d382cd8bda26a5778bed70f82be69af7e7f24f195984c6f642727d62e2c" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.090332 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-notification-agent" containerID="cri-o://caed40083fa597fe943a30fe27bc0e925ac084161f972ab054dae2a9368983ca" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.096101 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.096288 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" containerName="kube-state-metrics" containerID="cri-o://96311983bd4bbe76a84ad5addf79e1a778ba9e353f3375b6a5513195e6d9b82e" 
gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.125333 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f948dd74-zmh7q" event={"ID":"2c7241f3-92bb-4295-97d9-4284784b11f3","Type":"ContainerStarted","Data":"327fbdffbbf5497e0035e561919c3378443e7b079ffc004906f3ba795a9a7d96"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.125531 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f948dd74-zmh7q" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api-log" containerID="cri-o://8b4a5c95a5bafa9fdced2d0ae48a57fdaa45a090e28899f2b4a2464fc1d5c263" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.125840 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f948dd74-zmh7q" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.125886 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f948dd74-zmh7q" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api" containerID="cri-o://327fbdffbbf5497e0035e561919c3378443e7b079ffc004906f3ba795a9a7d96" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.149378 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2c281c58-a95e-4669-bdfc-465759817928","Type":"ContainerDied","Data":"5a29c412dac5423a08ae9fce6d1087baa1761f3318624a0bbfc2adcf8be0a81b"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.149428 4990 scope.go:117] "RemoveContainer" containerID="d296b3f03577d11daa223c38effcb4c833eb5944cd5256ee247941d30a2772a5" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.149725 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.151010 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.151255 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="cd4b299f-9ab6-4714-b911-9b1e11708f39" containerName="memcached" containerID="cri-o://1337738b96b97c494ef162bac005232edc2ea2057d25e1bca729e2912f0fc44b" gracePeriod=30 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.185191 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancebd6b-account-delete-9bmrw" event={"ID":"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0","Type":"ContainerStarted","Data":"dd7173133a9e289c66b00f347477e4f6aac22f12b8529a054821920fd94abee6"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.201683 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78f948dd74-zmh7q" podStartSLOduration=5.201661249 podStartE2EDuration="5.201661249s" podCreationTimestamp="2025-12-05 01:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:35:39.178761471 +0000 UTC m=+1637.554976832" watchObservedRunningTime="2025-12-05 01:35:39.201661249 +0000 UTC m=+1637.577876600" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.238693 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement06fa-account-delete-fb874" event={"ID":"ac1cabc4-d51d-43b6-8903-f098d13c1952","Type":"ContainerStarted","Data":"92f4dcfe11255c35e4976d54542faeb78216ee28cd1de89aeb76530d56bf5cf2"} Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.238767 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.248742 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.248877 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.259006 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:39 crc 
kubenswrapper[4990]: E1205 01:35:39.259181 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.259203 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server"
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.262316 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hr68g"]
Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.266224 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.266740 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd"
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.267465 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05b09-account-delete-wcpvd" event={"ID":"c7bf2416-2722-4ab6-a022-32116155fa68","Type":"ContainerStarted","Data":"3cf8a592f6307771d8d2c338cb9026f7c6b841c65ec7edb110af8e86004c2353"}
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.277366 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hr68g"]
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.301703 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vm2sd"]
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.310684 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vm2sd"]
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.315064 4990 scope.go:117] "RemoveContainer" containerID="2cf0fa6b65b48acaf9fa9180f44998e45eea4e56bcbf9a49157e844415633e4c"
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.330007 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5dc9df8c96-j8dx7"]
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.330221 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5dc9df8c96-j8dx7" podUID="dd33dbb9-4e51-47db-8129-a93493234f7f" containerName="keystone-api" containerID="cri-o://ddee66ac66bbe9676ade95661263c28f7cfb48141f52a3c8dcd54d952118736b" gracePeriod=30
Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.335396 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiea53-account-delete-m8d6d" event={"ID":"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d","Type":"ContainerStarted","Data":"db5ab2ad34b418acceaae86c6d848fa12ca745c5af6facdb984f6b78167d897b"}
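These ExecSync errors are readiness probes racing container shutdown: once CRI-O begins stopping a container it can no longer register an exec PID, so prober.go reports "Probe errored" instead of a normal probe failure until the ContainerDied event lands. A sketch of the exec-probe shape involved, assuming a recent k8s.io/api (k8s >= 1.23, where the field is ProbeHandler); only the script path is taken from the log, the image and timings are placeholders:

```go
// Illustrative container spec with the exec readiness probe seen above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "ovsdb-server",
		Image: "registry.example/ovn-ovs:latest", // placeholder image
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{
					// Script path as logged by the kubelet's ExecSync records.
					Command: []string{"/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"},
				},
			},
			PeriodSeconds:  10, // placeholder cadence
			TimeoutSeconds: 5,
		},
	}
	fmt.Println(c.ReadinessProbe.Exec.Command)
}
```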
event={"ID":"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d","Type":"ContainerStarted","Data":"db5ab2ad34b418acceaae86c6d848fa12ca745c5af6facdb984f6b78167d897b"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.335874 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.352129 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kzhdt"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.357423 4990 generic.go:334] "Generic (PLEG): container finished" podID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerID="5d677ac4cf7f17763cb57fdbd241dbbc43ee718b5236104aeaebafadb2a7637a" exitCode=0 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.357804 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5f858c6d-zxwsh" event={"ID":"82eb03c9-869c-447d-9b78-b4ef916b59ac","Type":"ContainerDied","Data":"5d677ac4cf7f17763cb57fdbd241dbbc43ee718b5236104aeaebafadb2a7637a"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.364013 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera4a4-account-delete-kpsxn" event={"ID":"ab630416-46f3-495f-92c2-732abce81632","Type":"ContainerStarted","Data":"78ad1f85b89a93447d310403e8048ded243b7fcf6bbb3b6bcfcc95d09d0ca2a1"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.364068 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera4a4-account-delete-kpsxn" event={"ID":"ab630416-46f3-495f-92c2-732abce81632","Type":"ContainerStarted","Data":"189d5e322c0f1073b0da2f66a1183cad81f6ef385526d60e1a49b439cbc20fc0"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.364634 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/cindera4a4-account-delete-kpsxn" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.368218 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kzhdt"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.378179 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.378353 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicandd52-account-delete-2dsms" event={"ID":"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36","Type":"ContainerStarted","Data":"ac58697418ac121fa73d0b9a29d202e88198028d1e67c9e8bcc9f85d96288120"} Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.380585 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.385525 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.385581 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" containerName="nova-cell1-conductor-conductor" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.392534 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6a81-account-create-update-xj7vd"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.392651 4990 generic.go:334] "Generic (PLEG): container finished" podID="bb029546-9d20-445a-9926-2a43c235a755" containerID="16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e" exitCode=0 Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.392948 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84997d8dc-hzdlp" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.393429 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84997d8dc-hzdlp" event={"ID":"bb029546-9d20-445a-9926-2a43c235a755","Type":"ContainerDied","Data":"16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.393469 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84997d8dc-hzdlp" event={"ID":"bb029546-9d20-445a-9926-2a43c235a755","Type":"ContainerDied","Data":"bb3763b8b6534931b38313a1a9508913b7ac093a9fd458f9f2a28d6377938a2a"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.403163 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.161:8776/healthcheck\": read tcp 10.217.0.2:39114->10.217.0.161:8776: read: connection reset by peer" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.410279 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc36c-account-delete-nt5gn" event={"ID":"0d7643ce-5dd7-48dc-9023-6502e5b0a05a","Type":"ContainerStarted","Data":"c65a15b3e0d4a2a9e3d5963bf83e644314077700d7427904f9b97f6e5c076851"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.411277 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6a81-account-create-update-xj7vd"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.414815 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7599ccc789-q6ldt" event={"ID":"e94da38c-b2d3-4ddb-b032-a6e5bfa62145","Type":"ContainerStarted","Data":"4ebf1c229577d1f3c048b69998d4a50fa699eca8648698967f1adc59a1ef23d9"} Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.419432 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cindera4a4-account-delete-kpsxn" podStartSLOduration=5.419415351 podStartE2EDuration="5.419415351s" podCreationTimestamp="2025-12-05 01:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:35:39.378255686 +0000 UTC m=+1637.754471047" watchObservedRunningTime="2025-12-05 01:35:39.419415351 +0000 UTC m=+1637.795630712" Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.434208 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.434288 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts podName:ab630416-46f3-495f-92c2-732abce81632 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:39.934274851 +0000 UTC m=+1638.310490212 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts") pod "cindera4a4-account-delete-kpsxn" (UID: "ab630416-46f3-495f-92c2-732abce81632") : configmap "openstack-scripts" not found Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.470025 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" probeResult="failure" output=< Dec 05 01:35:39 crc kubenswrapper[4990]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Dec 05 01:35:39 crc kubenswrapper[4990]: > Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.490779 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:46510->10.217.0.198:8775: read: connection reset by peer" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.491166 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:46508->10.217.0.198:8775: read: connection reset by peer" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.539958 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8d55n"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.550696 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8d55n"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.556662 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-06fa-account-create-update-tjxv8"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.577318 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-06fa-account-create-update-tjxv8"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.591353 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement06fa-account-delete-fb874"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.610676 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.642784 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rm8cx"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.650376 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.667767 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rm8cx"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.672210 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.672259 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" containerName="nova-scheduler-scheduler" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.685861 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cindera4a4-account-delete-kpsxn"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.694046 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a4a4-account-create-update-rvgtj"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.699122 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a4a4-account-create-update-rvgtj"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.705638 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-b4mqj"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.713333 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-b4mqj"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.722893 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronc36c-account-delete-nt5gn"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.730135 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c36c-account-create-update-qwx75"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.732560 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c36c-account-create-update-qwx75"] Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.778858 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.841953 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.898918 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74bb84bc86-8krlf" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:60926->10.217.0.160:9311: read: connection reset by peer" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.899747 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74bb84bc86-8krlf" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:60934->10.217.0.160:9311: read: connection reset by peer" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.908305 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod 
"2c281c58-a95e-4669-bdfc-465759817928" (UID: "2c281c58-a95e-4669-bdfc-465759817928"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.943959 4990 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c281c58-a95e-4669-bdfc-465759817928-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.944098 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:39 crc kubenswrapper[4990]: E1205 01:35:39.944145 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts podName:ab630416-46f3-495f-92c2-732abce81632 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:40.944131239 +0000 UTC m=+1639.320346600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts") pod "cindera4a4-account-delete-kpsxn" (UID: "ab630416-46f3-495f-92c2-732abce81632") : configmap "openstack-scripts" not found Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.963166 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e741872-f2ad-40b0-9447-797f97e11c82" path="/var/lib/kubelet/pods/0e741872-f2ad-40b0-9447-797f97e11c82/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.964008 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e99c8a9-852a-446a-b4bf-ff8da617539f" path="/var/lib/kubelet/pods/2e99c8a9-852a-446a-b4bf-ff8da617539f/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.964768 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93" path="/var/lib/kubelet/pods/3527aa1f-5b73-4fc1-b22e-0b6ee83a0e93/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.965258 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4182c8b1-5c4d-4f6b-aeca-9492abf6069e" path="/var/lib/kubelet/pods/4182c8b1-5c4d-4f6b-aeca-9492abf6069e/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.966309 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42546ba1-6f6e-437c-90f1-53368b287b1a" path="/var/lib/kubelet/pods/42546ba1-6f6e-437c-90f1-53368b287b1a/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.966834 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504df873-8902-4568-b465-40d75b755fee" path="/var/lib/kubelet/pods/504df873-8902-4568-b465-40d75b755fee/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.967394 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a8c0e5-85b3-46e5-8e1f-3939a1eafc14" path="/var/lib/kubelet/pods/62a8c0e5-85b3-46e5-8e1f-3939a1eafc14/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.968639 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c137d1b-6433-40ac-8036-84313eef1967" path="/var/lib/kubelet/pods/7c137d1b-6433-40ac-8036-84313eef1967/volumes" Dec 05 01:35:39 crc kubenswrapper[4990]: I1205 01:35:39.985540 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a54837-9e3c-4d46-a016-c53e3576aad3" path="/var/lib/kubelet/pods/87a54837-9e3c-4d46-a016-c53e3576aad3/volumes" Dec 05 01:35:40 
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.018111 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca5e656-876c-4e87-b049-5c284b211804" path="/var/lib/kubelet/pods/9ca5e656-876c-4e87-b049-5c284b211804/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.018794 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" path="/var/lib/kubelet/pods/a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.027744 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fe91e1-c026-47ec-a537-4bfd8c6ad73f" path="/var/lib/kubelet/pods/a8fe91e1-c026-47ec-a537-4bfd8c6ad73f/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.028957 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b" path="/var/lib/kubelet/pods/c8e2cb99-8fef-442d-b01d-29fcd5ea8f5b/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.029533 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b4c8de-486d-48ba-9db4-3f50e5b4f958" path="/var/lib/kubelet/pods/d5b4c8de-486d-48ba-9db4-3f50e5b4f958/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.074148 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19ad196-b05b-4ade-ba2b-3b532d447f8e" path="/var/lib/kubelet/pods/f19ad196-b05b-4ade-ba2b-3b532d447f8e/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.075304 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4475723-8c01-483c-991d-d686c6361021" path="/var/lib/kubelet/pods/f4475723-8c01-483c-991d-d686c6361021/volumes"
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.230010 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.272717 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-config-data" (OuterVolumeSpecName: "config-data") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.310856 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.310882 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.428662 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.451618 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb029546-9d20-445a-9926-2a43c235a755" (UID: "bb029546-9d20-445a-9926-2a43c235a755"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.460926 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener-log" containerID="cri-o://3fea20119ef314b71415f0e498137e3f7e618ac787a6fbcf0557233df608d6ed" gracePeriod=30 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.461378 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener" containerID="cri-o://8bc4083818a23adcc4860fdba5de05f1178e309898e0aed33be3be52454aeccc" gracePeriod=30 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.483083 4990 generic.go:334] "Generic (PLEG): container finished" podID="c7bf2416-2722-4ab6-a022-32116155fa68" containerID="2395f00493820a5ebb30a4c605fb8d5f23ada6746a45d56bd39a11683da2b3c3" exitCode=1 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.483608 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell05b09-account-delete-wcpvd" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.483635 4990 scope.go:117] "RemoveContainer" containerID="2395f00493820a5ebb30a4c605fb8d5f23ada6746a45d56bd39a11683da2b3c3" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.501107 4990 generic.go:334] "Generic (PLEG): container finished" podID="ab630416-46f3-495f-92c2-732abce81632" containerID="78ad1f85b89a93447d310403e8048ded243b7fcf6bbb3b6bcfcc95d09d0ca2a1" exitCode=1 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.501793 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/cindera4a4-account-delete-kpsxn" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.501819 4990 scope.go:117] "RemoveContainer" containerID="78ad1f85b89a93447d310403e8048ded243b7fcf6bbb3b6bcfcc95d09d0ca2a1" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.520794 4990 generic.go:334] "Generic (PLEG): container finished" podID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" containerID="99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.525155 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.525512 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb029546-9d20-445a-9926-2a43c235a755-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.540153 4990 generic.go:334] "Generic (PLEG): container finished" podID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerID="7321a2542512be340cbfde5ad63b19280567b9288afa32dc396d30a73fafa0d8" exitCode=1 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.540969 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/placement06fa-account-delete-fb874" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.541085 4990 scope.go:117] "RemoveContainer" containerID="7321a2542512be340cbfde5ad63b19280567b9288afa32dc396d30a73fafa0d8" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.549716 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="galera" containerID="cri-o://6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234" gracePeriod=29 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.555749 4990 generic.go:334] "Generic (PLEG): container finished" podID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerID="5f490bc39eb1a824091b54123c3705eafc0ddd4d3bb92d574be2d6b179034a7a" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.557927 4990 generic.go:334] "Generic (PLEG): container finished" podID="510e9e75-fc35-4bed-8e71-c6e27069f50a" containerID="96311983bd4bbe76a84ad5addf79e1a778ba9e353f3375b6a5513195e6d9b82e" exitCode=2 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.560767 4990 generic.go:334] "Generic (PLEG): container finished" podID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerID="003addad4fd04e4704b5108d770c5e33bbf4691ed94949868baed6f57737b3e3" exitCode=1 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.561366 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/barbicandd52-account-delete-2dsms" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.561399 4990 scope.go:117] "RemoveContainer" containerID="003addad4fd04e4704b5108d770c5e33bbf4691ed94949868baed6f57737b3e3" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.598919 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" podStartSLOduration=7.598901457 podStartE2EDuration="7.598901457s" podCreationTimestamp="2025-12-05 01:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:35:40.494690948 +0000 UTC m=+1638.870906309" watchObservedRunningTime="2025-12-05 01:35:40.598901457 +0000 UTC m=+1638.975116808" Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.649423 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.649514 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts podName:64bbbfd0-59f8-4fb6-8761-503cdf8b9f36 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:41.149497009 +0000 UTC m=+1639.525712370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts") pod "barbicandd52-account-delete-2dsms" (UID: "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36") : configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.650132 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glancebd6b-account-delete-9bmrw" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.651049 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glancebd6b-account-delete-9bmrw" podStartSLOduration=6.651029512 podStartE2EDuration="6.651029512s" podCreationTimestamp="2025-12-05 01:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:35:40.641497933 +0000 UTC m=+1639.017713294" watchObservedRunningTime="2025-12-05 01:35:40.651029512 +0000 UTC m=+1639.027244873" Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.651226 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.651288 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts podName:c7bf2416-2722-4ab6-a022-32116155fa68 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:41.151256239 +0000 UTC m=+1639.527471600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts") pod "novacell05b09-account-delete-wcpvd" (UID: "c7bf2416-2722-4ab6-a022-32116155fa68") : configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.682423 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7599ccc789-q6ldt" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker" containerID="cri-o://cc5093cdab379f16035c70128d1dfeb6ff51bf6e5e36c4ec7ac5f5e887cecd99" gracePeriod=30 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.682253 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7599ccc789-q6ldt" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker-log" containerID="cri-o://4ebf1c229577d1f3c048b69998d4a50fa699eca8648698967f1adc59a1ef23d9" gracePeriod=30 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.696061 4990 generic.go:334] "Generic (PLEG): container finished" podID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerID="60607d382cd8bda26a5778bed70f82be69af7e7f24f195984c6f642727d62e2c" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.696088 4990 generic.go:334] "Generic (PLEG): container finished" podID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerID="72b6662721b483f07b892ef1907c45dc83be4d09b4ac2d3ef321bc8da7ab9d10" exitCode=2 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.696095 4990 generic.go:334] "Generic (PLEG): container finished" podID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerID="31ce6ea891092f920fafd58685b5970d0c8960a1faf1c70db62854e2178e153a" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.698703 4990 generic.go:334] "Generic (PLEG): container finished" podID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerID="6ddb956e1bc6923210a8635165e707d600786748db6d8f83199e419eecac98d4" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.700253 4990 generic.go:334] "Generic (PLEG): container finished" podID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerID="fe46232d47a2817ca0f3b8f9049c81973cfa81de8d607b295f93e7368320d630" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.705734 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7599ccc789-q6ldt" podStartSLOduration=7.70572305 podStartE2EDuration="7.70572305s" podCreationTimestamp="2025-12-05 01:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 01:35:40.703845717 +0000 UTC m=+1639.080061078" watchObservedRunningTime="2025-12-05 01:35:40.70572305 +0000 UTC m=+1639.081938411" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.708263 4990 generic.go:334] "Generic (PLEG): container finished" podID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerID="327fbdffbbf5497e0035e561919c3378443e7b079ffc004906f3ba795a9a7d96" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.708283 4990 generic.go:334] "Generic (PLEG): container finished" podID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerID="8b4a5c95a5bafa9fdced2d0ae48a57fdaa45a090e28899f2b4a2464fc1d5c263" exitCode=143 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.711471 4990 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapiea53-account-delete-m8d6d" secret="" err="secret \"galera-openstack-dockercfg-j8mtq\" not found" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.711571 4990 scope.go:117] "RemoveContainer" containerID="2af88438a5f3d9524b6a0ed1c550ae8884033f86ecaf85f25e86861a398c0008" Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.712894 4990 generic.go:334] "Generic (PLEG): container finished" podID="10384219-030b-491b-884f-fd761eba4496" containerID="90907eab6e43a67e9dd116d95af526faeb9a66ef61e70f7e11881689a35c73d5" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.713827 4990 generic.go:334] "Generic (PLEG): container finished" podID="cd4b299f-9ab6-4714-b911-9b1e11708f39" containerID="1337738b96b97c494ef162bac005232edc2ea2057d25e1bca729e2912f0fc44b" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.718135 4990 generic.go:334] "Generic (PLEG): container finished" podID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerID="ddbc25bf62fa17b335529abe8efcf931bb13fc14cc17b4da59cbfeb8d6d41a3a" exitCode=0 Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.750850 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.750900 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.750910 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts podName:ac1cabc4-d51d-43b6-8903-f098d13c1952 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:41.250895058 +0000 UTC m=+1639.627110419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts") pod "placement06fa-account-delete-fb874" (UID: "ac1cabc4-d51d-43b6-8903-f098d13c1952") : configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.750933 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts podName:82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d nodeName:}" failed. No retries permitted until 2025-12-05 01:35:41.250924419 +0000 UTC m=+1639.627139780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts") pod "novaapiea53-account-delete-m8d6d" (UID: "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d") : configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.750957 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.750976 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts podName:b325e8cb-5fb2-4543-ad3c-c9f42a4572f0 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:41.25097036 +0000 UTC m=+1639.627185721 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts") pod "glancebd6b-account-delete-9bmrw" (UID: "b325e8cb-5fb2-4543-ad3c-c9f42a4572f0") : configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.953871 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: E1205 01:35:40.954156 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts podName:ab630416-46f3-495f-92c2-732abce81632 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.95414049 +0000 UTC m=+1641.330355851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts") pod "cindera4a4-account-delete-kpsxn" (UID: "ab630416-46f3-495f-92c2-732abce81632") : configmap "openstack-scripts" not found Dec 05 01:35:40 crc kubenswrapper[4990]: I1205 01:35:40.963337 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.098692 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.159318 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.159393 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts podName:c7bf2416-2722-4ab6-a022-32116155fa68 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.159375208 +0000 UTC m=+1640.535590579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts") pod "novacell05b09-account-delete-wcpvd" (UID: "c7bf2416-2722-4ab6-a022-32116155fa68") : configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.159768 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.159856 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts podName:64bbbfd0-59f8-4fb6-8761-503cdf8b9f36 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.159845691 +0000 UTC m=+1640.536061052 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts") pod "barbicandd52-account-delete-2dsms" (UID: "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36") : configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.261443 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.261537 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts podName:82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.261519178 +0000 UTC m=+1640.637734539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts") pod "novaapiea53-account-delete-m8d6d" (UID: "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d") : configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.261767 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.261772 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.261871 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts podName:ac1cabc4-d51d-43b6-8903-f098d13c1952 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.261852478 +0000 UTC m=+1640.638067839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts") pod "placement06fa-account-delete-fb874" (UID: "ac1cabc4-d51d-43b6-8903-f098d13c1952") : configmap "openstack-scripts" not found Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.261907 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts podName:b325e8cb-5fb2-4543-ad3c-c9f42a4572f0 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:42.261899789 +0000 UTC m=+1640.638115150 (durationBeforeRetry 1s). 
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.608648 4990 scope.go:117] "RemoveContainer" containerID="16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e"
Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.621715 4990 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.686s"
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621823 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f948dd74-zmh7q"
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621850 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rpg97"]
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621868 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" event={"ID":"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97","Type":"ContainerStarted","Data":"8bc4083818a23adcc4860fdba5de05f1178e309898e0aed33be3be52454aeccc"}
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621898 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rpg97"]
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621948 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05b09-account-delete-wcpvd" event={"ID":"c7bf2416-2722-4ab6-a022-32116155fa68","Type":"ContainerDied","Data":"2395f00493820a5ebb30a4c605fb8d5f23ada6746a45d56bd39a11683da2b3c3"}
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621969 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera4a4-account-delete-kpsxn" event={"ID":"ab630416-46f3-495f-92c2-732abce81632","Type":"ContainerDied","Data":"78ad1f85b89a93447d310403e8048ded243b7fcf6bbb3b6bcfcc95d09d0ca2a1"}
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.621989 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"23fef2f1-b3e2-4d6f-8beb-efd01386d758","Type":"ContainerDied","Data":"99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8"}
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622004 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement06fa-account-delete-fb874" event={"ID":"ac1cabc4-d51d-43b6-8903-f098d13c1952","Type":"ContainerDied","Data":"7321a2542512be340cbfde5ad63b19280567b9288afa32dc396d30a73fafa0d8"}
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622020 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3","Type":"ContainerDied","Data":"5f490bc39eb1a824091b54123c3705eafc0ddd4d3bb92d574be2d6b179034a7a"}
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622037 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bd6b-account-create-update-2mpnv"]
Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622051 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3","Type":"ContainerDied","Data":"c8954ad9b082700a214ede2bb593871ca640480a4372bc2c794dbdcb9fbf4e60"}
Dec 05 01:35:41 crc
kubenswrapper[4990]: I1205 01:35:41.622063 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8954ad9b082700a214ede2bb593871ca640480a4372bc2c794dbdcb9fbf4e60" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622074 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"510e9e75-fc35-4bed-8e71-c6e27069f50a","Type":"ContainerDied","Data":"96311983bd4bbe76a84ad5addf79e1a778ba9e353f3375b6a5513195e6d9b82e"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622125 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"510e9e75-fc35-4bed-8e71-c6e27069f50a","Type":"ContainerDied","Data":"7f5e007262af24ec50449dee9aaf340a149590a735c49b2f98fbe9e7d8c82214"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622137 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5e007262af24ec50449dee9aaf340a149590a735c49b2f98fbe9e7d8c82214" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622148 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glancebd6b-account-delete-9bmrw"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622169 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bd6b-account-create-update-2mpnv"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622209 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-phsw5"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622224 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-phsw5"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622237 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-dd52-account-create-update-gk7b2"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622249 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-dd52-account-create-update-gk7b2"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622261 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicandd52-account-delete-2dsms"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622275 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicandd52-account-delete-2dsms" event={"ID":"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36","Type":"ContainerDied","Data":"003addad4fd04e4704b5108d770c5e33bbf4691ed94949868baed6f57737b3e3"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622312 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc36c-account-delete-nt5gn" event={"ID":"0d7643ce-5dd7-48dc-9023-6502e5b0a05a","Type":"ContainerStarted","Data":"b6bf2bb125def428a4d480d2bbbd6133314c5ca3fe6fc81390c342688ce6ce20"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622329 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancebd6b-account-delete-9bmrw" event={"ID":"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0","Type":"ContainerStarted","Data":"f32cb51b220cae56d356be4dd5d1fa30a40674bac0da14eb72c6e3c748fd23b4"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622341 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7599ccc789-q6ldt" event={"ID":"e94da38c-b2d3-4ddb-b032-a6e5bfa62145","Type":"ContainerStarted","Data":"cc5093cdab379f16035c70128d1dfeb6ff51bf6e5e36c4ec7ac5f5e887cecd99"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622356 4990 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c5f858c6d-zxwsh" event={"ID":"82eb03c9-869c-447d-9b78-b4ef916b59ac","Type":"ContainerDied","Data":"9b75c4c6401f15bf4e7fe79181580337ec0abfc49e524320e6166604b34401d8"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622370 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b75c4c6401f15bf4e7fe79181580337ec0abfc49e524320e6166604b34401d8" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622381 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerDied","Data":"60607d382cd8bda26a5778bed70f82be69af7e7f24f195984c6f642727d62e2c"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622395 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerDied","Data":"72b6662721b483f07b892ef1907c45dc83be4d09b4ac2d3ef321bc8da7ab9d10"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622406 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerDied","Data":"31ce6ea891092f920fafd58685b5970d0c8960a1faf1c70db62854e2178e153a"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622418 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4hxkr"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622433 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bb84bc86-8krlf" event={"ID":"4489a490-bacc-498c-b0e3-d6b5cad13d34","Type":"ContainerDied","Data":"6ddb956e1bc6923210a8635165e707d600786748db6d8f83199e419eecac98d4"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622451 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4hxkr"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622465 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5b09-account-create-update-prgtc"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622496 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5b09-account-create-update-prgtc"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622511 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05b09-account-delete-wcpvd"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622525 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-d2rxf"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622536 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-d2rxf"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622550 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ea53-account-create-update-zvnfg"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622562 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d","Type":"ContainerDied","Data":"fe46232d47a2817ca0f3b8f9049c81973cfa81de8d607b295f93e7368320d630"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622578 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapiea53-account-delete-m8d6d"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622596 
4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ea53-account-create-update-zvnfg"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622610 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f948dd74-zmh7q" event={"ID":"2c7241f3-92bb-4295-97d9-4284784b11f3","Type":"ContainerDied","Data":"327fbdffbbf5497e0035e561919c3378443e7b079ffc004906f3ba795a9a7d96"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622624 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f948dd74-zmh7q" event={"ID":"2c7241f3-92bb-4295-97d9-4284784b11f3","Type":"ContainerDied","Data":"8b4a5c95a5bafa9fdced2d0ae48a57fdaa45a090e28899f2b4a2464fc1d5c263"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622643 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiea53-account-delete-m8d6d" event={"ID":"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d","Type":"ContainerStarted","Data":"2af88438a5f3d9524b6a0ed1c550ae8884033f86ecaf85f25e86861a398c0008"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622656 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10384219-030b-491b-884f-fd761eba4496","Type":"ContainerDied","Data":"90907eab6e43a67e9dd116d95af526faeb9a66ef61e70f7e11881689a35c73d5"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622670 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10384219-030b-491b-884f-fd761eba4496","Type":"ContainerDied","Data":"45c0a23118b8b647776826f93bf7d1d16c5b1174ed4446ecd18b09ed67ea9dd3"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622681 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c0a23118b8b647776826f93bf7d1d16c5b1174ed4446ecd18b09ed67ea9dd3" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622698 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cd4b299f-9ab6-4714-b911-9b1e11708f39","Type":"ContainerDied","Data":"1337738b96b97c494ef162bac005232edc2ea2057d25e1bca729e2912f0fc44b"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.622711 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba3c2a5d-0bec-4905-8cba-d0e565643fe7","Type":"ContainerDied","Data":"ddbc25bf62fa17b335529abe8efcf931bb13fc14cc17b4da59cbfeb8d6d41a3a"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.635438 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.644941 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.650101 4990 scope.go:117] "RemoveContainer" containerID="7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.702757 4990 scope.go:117] "RemoveContainer" containerID="16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e" Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.703926 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e\": container with ID starting with 16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e not found: ID does not exist" containerID="16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.704034 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e"} err="failed to get container status \"16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e\": rpc error: code = NotFound desc = could not find container \"16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e\": container with ID starting with 16fbd3724b89f938c175ee9ce25a3436b47a2fa6ff90036a1c918e6369ba703e not found: ID does not exist" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.704139 4990 scope.go:117] "RemoveContainer" containerID="7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578" Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.706531 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578\": container with ID starting with 7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578 not found: ID does not exist" containerID="7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.706630 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578"} err="failed to get container status \"7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578\": rpc error: code = NotFound desc = could not find container \"7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578\": container with ID starting with 7a443fba291ecdcac0e6448acb29bbf33b4870fc7e1f4e55df79dc991c5a9578 not found: ID does not exist" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.713884 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-84997d8dc-hzdlp"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.714320 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.723656 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-84997d8dc-hzdlp"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.731847 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"23fef2f1-b3e2-4d6f-8beb-efd01386d758","Type":"ContainerDied","Data":"145e5a4dc36d1dfbe72961ae42699553ff038cc54c52c7feca0037e661d14b43"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.731886 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145e5a4dc36d1dfbe72961ae42699553ff038cc54c52c7feca0037e661d14b43" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.752430 4990 generic.go:334] "Generic (PLEG): container finished" podID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" containerID="8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073" exitCode=0 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.752516 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dc80822-8cd5-4004-abdd-160ad6dcdd72","Type":"ContainerDied","Data":"8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.752549 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dc80822-8cd5-4004-abdd-160ad6dcdd72","Type":"ContainerDied","Data":"d8e75cc6e8c2c7a97855fc96c167bf14c5d22c4bb98a4b0e536dd698e0d33804"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.752563 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e75cc6e8c2c7a97855fc96c167bf14c5d22c4bb98a4b0e536dd698e0d33804" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.754103 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.756690 4990 generic.go:334] "Generic (PLEG): container finished" podID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerID="3fea20119ef314b71415f0e498137e3f7e618ac787a6fbcf0557233df608d6ed" exitCode=143 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.756768 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" event={"ID":"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97","Type":"ContainerDied","Data":"3fea20119ef314b71415f0e498137e3f7e618ac787a6fbcf0557233df608d6ed"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.761281 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f948dd74-zmh7q" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.762294 4990 generic.go:334] "Generic (PLEG): container finished" podID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerID="2af88438a5f3d9524b6a0ed1c550ae8884033f86ecaf85f25e86861a398c0008" exitCode=1 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.762334 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiea53-account-delete-m8d6d" event={"ID":"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d","Type":"ContainerDied","Data":"2af88438a5f3d9524b6a0ed1c550ae8884033f86ecaf85f25e86861a398c0008"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.768787 4990 generic.go:334] "Generic (PLEG): container finished" podID="b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" containerID="f32cb51b220cae56d356be4dd5d1fa30a40674bac0da14eb72c6e3c748fd23b4" exitCode=1 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.768934 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancebd6b-account-delete-9bmrw" event={"ID":"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0","Type":"ContainerDied","Data":"f32cb51b220cae56d356be4dd5d1fa30a40674bac0da14eb72c6e3c748fd23b4"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.769191 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778215 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5vct\" (UniqueName: \"kubernetes.io/projected/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-api-access-d5vct\") pod \"510e9e75-fc35-4bed-8e71-c6e27069f50a\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778255 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-combined-ca-bundle\") pod \"510e9e75-fc35-4bed-8e71-c6e27069f50a\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778281 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-combined-ca-bundle\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778302 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-config-data\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778328 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-internal-tls-certs\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778347 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-certs\") pod \"510e9e75-fc35-4bed-8e71-c6e27069f50a\" (UID: 
\"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778386 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-public-tls-certs\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778413 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4rr\" (UniqueName: \"kubernetes.io/projected/82eb03c9-869c-447d-9b78-b4ef916b59ac-kube-api-access-2f4rr\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778473 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-scripts\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778551 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-config\") pod \"510e9e75-fc35-4bed-8e71-c6e27069f50a\" (UID: \"510e9e75-fc35-4bed-8e71-c6e27069f50a\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.778603 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82eb03c9-869c-447d-9b78-b4ef916b59ac-logs\") pod \"82eb03c9-869c-447d-9b78-b4ef916b59ac\" (UID: \"82eb03c9-869c-447d-9b78-b4ef916b59ac\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.779378 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82eb03c9-869c-447d-9b78-b4ef916b59ac-logs" (OuterVolumeSpecName: "logs") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.791632 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.791875 4990 generic.go:334] "Generic (PLEG): container finished" podID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerID="4ebf1c229577d1f3c048b69998d4a50fa699eca8648698967f1adc59a1ef23d9" exitCode=143 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.791985 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7599ccc789-q6ldt" event={"ID":"e94da38c-b2d3-4ddb-b032-a6e5bfa62145","Type":"ContainerDied","Data":"4ebf1c229577d1f3c048b69998d4a50fa699eca8648698967f1adc59a1ef23d9"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.794615 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cd4b299f-9ab6-4714-b911-9b1e11708f39","Type":"ContainerDied","Data":"6f449a97865ee5ddd5585e5ec55ce64c49a49a43bfd44e17528c85dcce29ecf7"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.794645 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f449a97865ee5ddd5585e5ec55ce64c49a49a43bfd44e17528c85dcce29ecf7" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.794718 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.796554 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-api-access-d5vct" (OuterVolumeSpecName: "kube-api-access-d5vct") pod "510e9e75-fc35-4bed-8e71-c6e27069f50a" (UID: "510e9e75-fc35-4bed-8e71-c6e27069f50a"). InnerVolumeSpecName "kube-api-access-d5vct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.798470 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-scripts" (OuterVolumeSpecName: "scripts") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.798850 4990 generic.go:334] "Generic (PLEG): container finished" podID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerID="95871264dddedee0223bf43470e710503cc4d9eb3d22d3ebef3d08484f77a4e6" exitCode=0 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.798910 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b5ac2be-fc48-4bde-a668-b3549462a101","Type":"ContainerDied","Data":"95871264dddedee0223bf43470e710503cc4d9eb3d22d3ebef3d08484f77a4e6"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.798936 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b5ac2be-fc48-4bde-a668-b3549462a101","Type":"ContainerDied","Data":"2ad1ba201f2b8f8454af2e5dd5cdffefad320a749d0a4a06f79652a2ccdc329c"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.798947 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad1ba201f2b8f8454af2e5dd5cdffefad320a749d0a4a06f79652a2ccdc329c" Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.806202 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.814574 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82eb03c9-869c-447d-9b78-b4ef916b59ac-kube-api-access-2f4rr" (OuterVolumeSpecName: "kube-api-access-2f4rr") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "kube-api-access-2f4rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.818722 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.820717 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.820748 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="ovn-northd" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.820907 4990 generic.go:334] "Generic (PLEG): container finished" podID="0d7643ce-5dd7-48dc-9023-6502e5b0a05a" containerID="b6bf2bb125def428a4d480d2bbbd6133314c5ca3fe6fc81390c342688ce6ce20" exitCode=1 Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.821003 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc36c-account-delete-nt5gn" event={"ID":"0d7643ce-5dd7-48dc-9023-6502e5b0a05a","Type":"ContainerDied","Data":"b6bf2bb125def428a4d480d2bbbd6133314c5ca3fe6fc81390c342688ce6ce20"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.821056 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc36c-account-delete-nt5gn" event={"ID":"0d7643ce-5dd7-48dc-9023-6502e5b0a05a","Type":"ContainerDied","Data":"c65a15b3e0d4a2a9e3d5963bf83e644314077700d7427904f9b97f6e5c076851"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.821068 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65a15b3e0d4a2a9e3d5963bf83e644314077700d7427904f9b97f6e5c076851" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.833217 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.834557 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d","Type":"ContainerDied","Data":"7ab3c9ce6977bb1403dbb8e5401d5aca03aed36fc403f684f23b40dd751bf40a"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.834621 4990 scope.go:117] "RemoveContainer" containerID="fe46232d47a2817ca0f3b8f9049c81973cfa81de8d607b295f93e7368320d630" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.865922 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f948dd74-zmh7q" event={"ID":"2c7241f3-92bb-4295-97d9-4284784b11f3","Type":"ContainerDied","Data":"257b886219f341cd87d2cad98bf72ca9cfa0560f415b9de2c61e1e4cd8739785"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.866009 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f948dd74-zmh7q" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.877840 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74bb84bc86-8krlf" event={"ID":"4489a490-bacc-498c-b0e3-d6b5cad13d34","Type":"ContainerDied","Data":"d7257257bf3dcc51f9892693e78899007d651694cccc8d56e89312f1dcad3bbc"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.877925 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74bb84bc86-8krlf" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879671 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-internal-tls-certs\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879706 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-combined-ca-bundle\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879738 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx8xp\" (UniqueName: \"kubernetes.io/projected/2c7241f3-92bb-4295-97d9-4284784b11f3-kube-api-access-hx8xp\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879773 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-logs\") pod \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879797 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-combined-ca-bundle\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879812 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-internal-tls-certs\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879836 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-internal-tls-certs\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879867 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-combined-ca-bundle\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879911 4990 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-combined-ca-bundle\") pod \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879946 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq282\" (UniqueName: \"kubernetes.io/projected/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-kube-api-access-kq282\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879964 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-public-tls-certs\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.879991 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7241f3-92bb-4295-97d9-4284784b11f3-logs\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880015 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-config-data\") pod \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880053 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880072 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-httpd-run\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880088 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8v2q\" (UniqueName: \"kubernetes.io/projected/10384219-030b-491b-884f-fd761eba4496-kube-api-access-w8v2q\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880104 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-logs\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880142 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880168 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data\") pod \"2c7241f3-92bb-4295-97d9-4284784b11f3\" (UID: \"2c7241f3-92bb-4295-97d9-4284784b11f3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880188 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6l7m\" (UniqueName: \"kubernetes.io/projected/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-kube-api-access-m6l7m\") pod \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880229 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10384219-030b-491b-884f-fd761eba4496-logs\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880246 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-scripts\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880263 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-scripts\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880299 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880319 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-nova-metadata-tls-certs\") pod \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\" (UID: \"ba3c2a5d-0bec-4905-8cba-d0e565643fe7\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880338 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-public-tls-certs\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880358 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10384219-030b-491b-884f-fd761eba4496-etc-machine-id\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880372 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data-custom\") pod \"10384219-030b-491b-884f-fd761eba4496\" (UID: \"10384219-030b-491b-884f-fd761eba4496\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880394 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-config-data\") pod \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\" (UID: \"47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880789 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82eb03c9-869c-447d-9b78-b4ef916b59ac-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880806 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5vct\" (UniqueName: \"kubernetes.io/projected/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-api-access-d5vct\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880819 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4rr\" (UniqueName: \"kubernetes.io/projected/82eb03c9-869c-447d-9b78-b4ef916b59ac-kube-api-access-2f4rr\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.880827 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.881623 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10384219-030b-491b-884f-fd761eba4496-logs" (OuterVolumeSpecName: "logs") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.881877 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.882258 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-logs" (OuterVolumeSpecName: "logs") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.884874 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c5f858c6d-zxwsh" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.885286 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.885435 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba3c2a5d-0bec-4905-8cba-d0e565643fe7","Type":"ContainerDied","Data":"89702e12cd0b1fc59fed2d582dd42159a3c5e067dea8924383f1f91d9cf11400"} Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.885511 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.885658 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.892386 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-logs" (OuterVolumeSpecName: "logs") pod "ba3c2a5d-0bec-4905-8cba-d0e565643fe7" (UID: "ba3c2a5d-0bec-4905-8cba-d0e565643fe7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.892784 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10384219-030b-491b-884f-fd761eba4496-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.892962 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c7241f3-92bb-4295-97d9-4284784b11f3-logs" (OuterVolumeSpecName: "logs") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.930004 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10384219-030b-491b-884f-fd761eba4496-kube-api-access-w8v2q" (OuterVolumeSpecName: "kube-api-access-w8v2q") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "kube-api-access-w8v2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.933419 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.936853 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-scripts" (OuterVolumeSpecName: "scripts") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.940249 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187debb7-c09c-43ee-b6bf-263a1df1d4e0" path="/var/lib/kubelet/pods/187debb7-c09c-43ee-b6bf-263a1df1d4e0/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.940874 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d400066-a9cb-4663-a398-9b2dfdeba85e" path="/var/lib/kubelet/pods/2d400066-a9cb-4663-a398-9b2dfdeba85e/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.941395 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b24fcb-1091-4e53-95d0-b1133f1a9b92" path="/var/lib/kubelet/pods/31b24fcb-1091-4e53-95d0-b1133f1a9b92/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.941929 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5" path="/var/lib/kubelet/pods/3aa3ad14-f415-4ccf-8b40-3e69eeadfbb5/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.943277 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f41429-2e5c-4fa2-adca-fc89ad4f4175" path="/var/lib/kubelet/pods/61f41429-2e5c-4fa2-adca-fc89ad4f4175/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.943790 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-scripts" (OuterVolumeSpecName: "scripts") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.944170 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c761da-2168-4966-b204-cddef6555a72" path="/var/lib/kubelet/pods/72c761da-2168-4966-b204-cddef6555a72/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.947892 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95552cfe-d576-4654-af71-fcd3c3c983ab" path="/var/lib/kubelet/pods/95552cfe-d576-4654-af71-fcd3c3c983ab/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.949033 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb029546-9d20-445a-9926-2a43c235a755" path="/var/lib/kubelet/pods/bb029546-9d20-445a-9926-2a43c235a755/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.950129 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f690712b-647d-455f-af3b-adbe25e2662d" path="/var/lib/kubelet/pods/f690712b-647d-455f-af3b-adbe25e2662d/volumes" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.957522 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.960188 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7241f3-92bb-4295-97d9-4284784b11f3-kube-api-access-hx8xp" (OuterVolumeSpecName: "kube-api-access-hx8xp") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). 
InnerVolumeSpecName "kube-api-access-hx8xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.960889 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.960966 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-kube-api-access-kq282" (OuterVolumeSpecName: "kube-api-access-kq282") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "kube-api-access-kq282". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.965271 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-kube-api-access-m6l7m" (OuterVolumeSpecName: "kube-api-access-m6l7m") pod "ba3c2a5d-0bec-4905-8cba-d0e565643fe7" (UID: "ba3c2a5d-0bec-4905-8cba-d0e565643fe7"). InnerVolumeSpecName "kube-api-access-m6l7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.981375 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.981411 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-public-tls-certs\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.981449 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489a490-bacc-498c-b0e3-d6b5cad13d34-logs\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.981522 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.981554 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-scripts\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.981608 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-combined-ca-bundle\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc 
kubenswrapper[4990]: I1205 01:35:41.981694 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-internal-tls-certs\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.983829 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4489a490-bacc-498c-b0e3-d6b5cad13d34-logs" (OuterVolumeSpecName: "logs") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.986888 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-httpd-run\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987160 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987402 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data-custom\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987472 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-logs\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987508 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-public-tls-certs\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987524 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-config-data\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987551 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwwxq\" (UniqueName: \"kubernetes.io/projected/4489a490-bacc-498c-b0e3-d6b5cad13d34-kube-api-access-vwwxq\") pod \"4489a490-bacc-498c-b0e3-d6b5cad13d34\" (UID: \"4489a490-bacc-498c-b0e3-d6b5cad13d34\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987580 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrfp\" (UniqueName: 
\"kubernetes.io/projected/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-kube-api-access-jqrfp\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.987610 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-combined-ca-bundle\") pod \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\" (UID: \"5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d\") " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.988047 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-logs" (OuterVolumeSpecName: "logs") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989080 4990 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10384219-030b-491b-884f-fd761eba4496-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989110 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989124 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489a490-bacc-498c-b0e3-d6b5cad13d34-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989136 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx8xp\" (UniqueName: \"kubernetes.io/projected/2c7241f3-92bb-4295-97d9-4284784b11f3-kube-api-access-hx8xp\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989149 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989162 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq282\" (UniqueName: \"kubernetes.io/projected/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-kube-api-access-kq282\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989172 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c7241f3-92bb-4295-97d9-4284784b11f3-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989185 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989195 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989207 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989218 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8v2q\" (UniqueName: \"kubernetes.io/projected/10384219-030b-491b-884f-fd761eba4496-kube-api-access-w8v2q\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989230 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989254 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989266 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989280 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6l7m\" (UniqueName: \"kubernetes.io/projected/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-kube-api-access-m6l7m\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989291 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10384219-030b-491b-884f-fd761eba4496-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989629 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: I1205 01:35:41.989649 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:41 crc kubenswrapper[4990]: E1205 01:35:41.997619 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:41.999411 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.010601 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.010678 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" containerName="nova-cell0-conductor-conductor" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.028218 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.033151 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4489a490-bacc-498c-b0e3-d6b5cad13d34-kube-api-access-vwwxq" (OuterVolumeSpecName: "kube-api-access-vwwxq") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "kube-api-access-vwwxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.034185 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.034204 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-scripts" (OuterVolumeSpecName: "scripts") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.034215 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-kube-api-access-jqrfp" (OuterVolumeSpecName: "kube-api-access-jqrfp") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "kube-api-access-jqrfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.077208 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234 is running failed: container process not found" containerID="6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.077473 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234 is running failed: container process not found" containerID="6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.091585 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.092114 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.092183 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.091738 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.092332 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data podName:ed473a7a-f068-49a3-ae4c-b57b39e33b28 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:50.092309058 +0000 UTC m=+1648.468524459 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data") pod "rabbitmq-cell1-server-0" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28") : configmap "rabbitmq-cell1-config-data" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.091649 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234 is running failed: container process not found" containerID="6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.092368 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="galera" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.092248 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwwxq\" (UniqueName: \"kubernetes.io/projected/4489a490-bacc-498c-b0e3-d6b5cad13d34-kube-api-access-vwwxq\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.092428 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqrfp\" (UniqueName: \"kubernetes.io/projected/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-kube-api-access-jqrfp\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.139375 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.195862 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.195954 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts podName:64bbbfd0-59f8-4fb6-8761-503cdf8b9f36 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:44.1959327 +0000 UTC m=+1642.572148111 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts") pod "barbicandd52-account-delete-2dsms" (UID: "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36") : configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.196468 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.196552 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.196584 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts podName:c7bf2416-2722-4ab6-a022-32116155fa68 nodeName:}" failed. 
No retries permitted until 2025-12-05 01:35:44.196572478 +0000 UTC m=+1642.572787939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts") pod "novacell05b09-account-delete-wcpvd" (UID: "c7bf2416-2722-4ab6-a022-32116155fa68") : configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.250886 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.298114 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.298133 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.298205 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.298208 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts podName:b325e8cb-5fb2-4543-ad3c-c9f42a4572f0 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:44.298190294 +0000 UTC m=+1642.674405655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts") pod "glancebd6b-account-delete-9bmrw" (UID: "b325e8cb-5fb2-4543-ad3c-c9f42a4572f0") : configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.298260 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts podName:82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d nodeName:}" failed. No retries permitted until 2025-12-05 01:35:44.298247116 +0000 UTC m=+1642.674462477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts") pod "novaapiea53-account-delete-m8d6d" (UID: "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d") : configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.298009 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.298536 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts podName:ac1cabc4-d51d-43b6-8903-f098d13c1952 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:44.298517613 +0000 UTC m=+1642.674732974 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts") pod "placement06fa-account-delete-fb874" (UID: "ac1cabc4-d51d-43b6-8903-f098d13c1952") : configmap "openstack-scripts" not found Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.342214 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "510e9e75-fc35-4bed-8e71-c6e27069f50a" (UID: "510e9e75-fc35-4bed-8e71-c6e27069f50a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.342325 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.395167 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data" (OuterVolumeSpecName: "config-data") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.399420 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.399455 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.399471 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.421381 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-config-data" (OuterVolumeSpecName: "config-data") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.423854 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.424974 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.435975 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "510e9e75-fc35-4bed-8e71-c6e27069f50a" (UID: "510e9e75-fc35-4bed-8e71-c6e27069f50a"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.481796 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-config-data" (OuterVolumeSpecName: "config-data") pod "ba3c2a5d-0bec-4905-8cba-d0e565643fe7" (UID: "ba3c2a5d-0bec-4905-8cba-d0e565643fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.502066 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.511405 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.511474 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.511503 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.511516 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.511524 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.511535 4990 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.538437 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "10384219-030b-491b-884f-fd761eba4496" (UID: 
"10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.542043 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.567962 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data" (OuterVolumeSpecName: "config-data") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.577801 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.580970 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.608538 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-config-data" (OuterVolumeSpecName: "config-data") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.611253 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.611860 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba3c2a5d-0bec-4905-8cba-d0e565643fe7" (UID: "ba3c2a5d-0bec-4905-8cba-d0e565643fe7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.612917 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.612996 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.613053 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.613118 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.613179 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.613234 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.613295 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.613352 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.634417 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" (UID: "47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.638603 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "10384219-030b-491b-884f-fd761eba4496" (UID: "10384219-030b-491b-884f-fd761eba4496"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.641215 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.657080 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.664429 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ba3c2a5d-0bec-4905-8cba-d0e565643fe7" (UID: "ba3c2a5d-0bec-4905-8cba-d0e565643fe7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.688286 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.689053 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "510e9e75-fc35-4bed-8e71-c6e27069f50a" (UID: "510e9e75-fc35-4bed-8e71-c6e27069f50a"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.692900 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c7241f3-92bb-4295-97d9-4284784b11f3" (UID: "2c7241f3-92bb-4295-97d9-4284784b11f3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.701849 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-config-data" (OuterVolumeSpecName: "config-data") pod "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" (UID: "5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.702330 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data" (OuterVolumeSpecName: "config-data") pod "4489a490-bacc-498c-b0e3-d6b5cad13d34" (UID: "4489a490-bacc-498c-b0e3-d6b5cad13d34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714471 4990 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba3c2a5d-0bec-4905-8cba-d0e565643fe7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714514 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10384219-030b-491b-884f-fd761eba4496-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714529 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489a490-bacc-498c-b0e3-d6b5cad13d34-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714541 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714552 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714563 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714574 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714588 4990 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/510e9e75-fc35-4bed-8e71-c6e27069f50a-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714599 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7241f3-92bb-4295-97d9-4284784b11f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.714612 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.714560 4990 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 01:35:42 crc kubenswrapper[4990]: E1205 01:35:42.714671 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data podName:809c1920-3205-411c-a8c1-ed027b7e3b1f nodeName:}" failed. No retries permitted until 2025-12-05 01:35:50.714654899 +0000 UTC m=+1649.090870260 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data") pod "rabbitmq-server-0" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f") : configmap "rabbitmq-config-data" not found Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.723550 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "82eb03c9-869c-447d-9b78-b4ef916b59ac" (UID: "82eb03c9-869c-447d-9b78-b4ef916b59ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.815685 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82eb03c9-869c-447d-9b78-b4ef916b59ac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.896438 4990 generic.go:334] "Generic (PLEG): container finished" podID="dd33dbb9-4e51-47db-8129-a93493234f7f" containerID="ddee66ac66bbe9676ade95661263c28f7cfb48141f52a3c8dcd54d952118736b" exitCode=0 Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.898714 4990 generic.go:334] "Generic (PLEG): container finished" podID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerID="3511ef3c763384814d89ad4ea38df77c466fcbb49bbb254ff4f1acd94b9088c2" exitCode=1 Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.901779 4990 generic.go:334] "Generic (PLEG): container finished" podID="ab630416-46f3-495f-92c2-732abce81632" containerID="d02c97f717b88563cf577b01b6bef8cd4858482f699b83682b846f62dbb607ec" exitCode=1 Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.903895 4990 generic.go:334] "Generic (PLEG): container finished" podID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerID="1648c0a8495c50ad4a23e1a6ed5f5b8f6c1c09dccf4ff6d9981248049ef54dbf" exitCode=1 Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.906838 4990 generic.go:334] "Generic (PLEG): container finished" podID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerID="6d4760603a654f0855f0b6eefff463312ee7c990e1aafe561c11bc0ce58c2194" exitCode=1 Dec 05 01:35:42 crc kubenswrapper[4990]: I1205 01:35:42.914063 4990 generic.go:334] "Generic (PLEG): container finished" podID="c7bf2416-2722-4ab6-a022-32116155fa68" containerID="bfd26f3b6ec2e81ecb4f3ded5d8569c766eac173e5de019ab3d80eb2d6da588e" exitCode=1 Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:42.998604 4990 generic.go:334] "Generic (PLEG): container finished" podID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerID="6a124f2ceb58f1b28fd7e33d50fc28756c66696a4774e8efa70e6a53e7a97329" exitCode=0 Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.004071 4990 generic.go:334] "Generic (PLEG): container finished" podID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerID="39a3ea367ecbac2fdb7b56ed37380e3e71e8af696eebed8fe12028523b333328" exitCode=0 Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.008982 4990 generic.go:334] "Generic (PLEG): container finished" podID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerID="6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234" exitCode=0 Dec 05 01:35:43 crc kubenswrapper[4990]: E1205 01:35:43.021534 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:43 crc kubenswrapper[4990]: E1205 01:35:43.021594 4990 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts podName:ab630416-46f3-495f-92c2-732abce81632 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:47.021579135 +0000 UTC m=+1645.397794496 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts") pod "cindera4a4-account-delete-kpsxn" (UID: "ab630416-46f3-495f-92c2-732abce81632") : configmap "openstack-scripts" not found Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.032904 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 01:35:43 crc kubenswrapper[4990]: E1205 01:35:43.145096 4990 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.216s" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145272 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9df8c96-j8dx7" event={"ID":"dd33dbb9-4e51-47db-8129-a93493234f7f","Type":"ContainerDied","Data":"ddee66ac66bbe9676ade95661263c28f7cfb48141f52a3c8dcd54d952118736b"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145315 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiea53-account-delete-m8d6d" event={"ID":"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d","Type":"ContainerDied","Data":"3511ef3c763384814d89ad4ea38df77c466fcbb49bbb254ff4f1acd94b9088c2"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145339 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera4a4-account-delete-kpsxn" event={"ID":"ab630416-46f3-495f-92c2-732abce81632","Type":"ContainerDied","Data":"d02c97f717b88563cf577b01b6bef8cd4858482f699b83682b846f62dbb607ec"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145360 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement06fa-account-delete-fb874" event={"ID":"ac1cabc4-d51d-43b6-8903-f098d13c1952","Type":"ContainerDied","Data":"1648c0a8495c50ad4a23e1a6ed5f5b8f6c1c09dccf4ff6d9981248049ef54dbf"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145382 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicandd52-account-delete-2dsms" event={"ID":"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36","Type":"ContainerDied","Data":"6d4760603a654f0855f0b6eefff463312ee7c990e1aafe561c11bc0ce58c2194"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145405 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05b09-account-delete-wcpvd" event={"ID":"c7bf2416-2722-4ab6-a022-32116155fa68","Type":"ContainerDied","Data":"bfd26f3b6ec2e81ecb4f3ded5d8569c766eac173e5de019ab3d80eb2d6da588e"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145425 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ed473a7a-f068-49a3-ae4c-b57b39e33b28","Type":"ContainerDied","Data":"6a124f2ceb58f1b28fd7e33d50fc28756c66696a4774e8efa70e6a53e7a97329"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145445 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ed473a7a-f068-49a3-ae4c-b57b39e33b28","Type":"ContainerDied","Data":"2463bc5c975a8980bdf5525196080f584d67342d7e96699073d6039c30d943d0"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145462 4990 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="2463bc5c975a8980bdf5525196080f584d67342d7e96699073d6039c30d943d0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145478 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"809c1920-3205-411c-a8c1-ed027b7e3b1f","Type":"ContainerDied","Data":"39a3ea367ecbac2fdb7b56ed37380e3e71e8af696eebed8fe12028523b333328"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145521 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00beb76a-d4d2-4cd8-bc04-e268c2397388","Type":"ContainerDied","Data":"6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145540 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00beb76a-d4d2-4cd8-bc04-e268c2397388","Type":"ContainerDied","Data":"f0a0fcb7173185444d18bcea3eb5a01cdd9f7a5e66c7e956200719c63cb6978e"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145555 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0a0fcb7173185444d18bcea3eb5a01cdd9f7a5e66c7e956200719c63cb6978e" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145569 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancebd6b-account-delete-9bmrw" event={"ID":"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0","Type":"ContainerDied","Data":"dd7173133a9e289c66b00f347477e4f6aac22f12b8529a054821920fd94abee6"} Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.145586 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7173133a9e289c66b00f347477e4f6aac22f12b8529a054821920fd94abee6" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.155464 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.191159 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.198295 4990 scope.go:117] "RemoveContainer" containerID="d20b6d3367c7e8bc8e6b2c77a261707a5e35a27706e2fb7941de8c18a86ffb76" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.201563 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.234364 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc36c-account-delete-nt5gn" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.246396 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.248439 4990 scope.go:117] "RemoveContainer" containerID="327fbdffbbf5497e0035e561919c3378443e7b079ffc004906f3ba795a9a7d96" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.255516 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.255937 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.262698 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glancebd6b-account-delete-9bmrw" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.266877 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f948dd74-zmh7q"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.276593 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.281223 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78f948dd74-zmh7q"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.286451 4990 scope.go:117] "RemoveContainer" containerID="8b4a5c95a5bafa9fdced2d0ae48a57fdaa45a090e28899f2b4a2464fc1d5c263" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.297508 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74bb84bc86-8krlf"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.297940 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.302340 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc9df8c96-j8dx7" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.305804 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74bb84bc86-8krlf"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.308727 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.321915 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325650 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-operator-scripts\") pod \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325689 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-combined-ca-bundle\") pod \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325734 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-memcached-tls-certs\") pod \"cd4b299f-9ab6-4714-b911-9b1e11708f39\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325760 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgdvn\" (UniqueName: \"kubernetes.io/projected/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-kube-api-access-cgdvn\") pod \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\" (UID: \"0d7643ce-5dd7-48dc-9023-6502e5b0a05a\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325776 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcs4p\" (UniqueName: \"kubernetes.io/projected/23fef2f1-b3e2-4d6f-8beb-efd01386d758-kube-api-access-xcs4p\") pod 
\"23fef2f1-b3e2-4d6f-8beb-efd01386d758\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325830 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-kolla-config\") pod \"cd4b299f-9ab6-4714-b911-9b1e11708f39\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325846 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-combined-ca-bundle\") pod \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325862 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-config-data\") pod \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\" (UID: \"23fef2f1-b3e2-4d6f-8beb-efd01386d758\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325881 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-config-data\") pod \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325900 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56nkv\" (UniqueName: \"kubernetes.io/projected/0dc80822-8cd5-4004-abdd-160ad6dcdd72-kube-api-access-56nkv\") pod \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\" (UID: \"0dc80822-8cd5-4004-abdd-160ad6dcdd72\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325931 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-config-data\") pod \"cd4b299f-9ab6-4714-b911-9b1e11708f39\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.325961 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-combined-ca-bundle\") pod \"cd4b299f-9ab6-4714-b911-9b1e11708f39\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.326024 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/cd4b299f-9ab6-4714-b911-9b1e11708f39-kube-api-access-8kxhw\") pod \"cd4b299f-9ab6-4714-b911-9b1e11708f39\" (UID: \"cd4b299f-9ab6-4714-b911-9b1e11708f39\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.327450 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d7643ce-5dd7-48dc-9023-6502e5b0a05a" (UID: "0d7643ce-5dd7-48dc-9023-6502e5b0a05a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.327846 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-config-data" (OuterVolumeSpecName: "config-data") pod "cd4b299f-9ab6-4714-b911-9b1e11708f39" (UID: "cd4b299f-9ab6-4714-b911-9b1e11708f39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.333245 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.333266 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.336978 4990 scope.go:117] "RemoveContainer" containerID="6ddb956e1bc6923210a8635165e707d600786748db6d8f83199e419eecac98d4" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.340809 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "cd4b299f-9ab6-4714-b911-9b1e11708f39" (UID: "cd4b299f-9ab6-4714-b911-9b1e11708f39"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.341823 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.343330 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-kube-api-access-cgdvn" (OuterVolumeSpecName: "kube-api-access-cgdvn") pod "0d7643ce-5dd7-48dc-9023-6502e5b0a05a" (UID: "0d7643ce-5dd7-48dc-9023-6502e5b0a05a"). InnerVolumeSpecName "kube-api-access-cgdvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.345240 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc80822-8cd5-4004-abdd-160ad6dcdd72-kube-api-access-56nkv" (OuterVolumeSpecName: "kube-api-access-56nkv") pod "0dc80822-8cd5-4004-abdd-160ad6dcdd72" (UID: "0dc80822-8cd5-4004-abdd-160ad6dcdd72"). InnerVolumeSpecName "kube-api-access-56nkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.354763 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23fef2f1-b3e2-4d6f-8beb-efd01386d758-kube-api-access-xcs4p" (OuterVolumeSpecName: "kube-api-access-xcs4p") pod "23fef2f1-b3e2-4d6f-8beb-efd01386d758" (UID: "23fef2f1-b3e2-4d6f-8beb-efd01386d758"). InnerVolumeSpecName "kube-api-access-xcs4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.360599 4990 scope.go:117] "RemoveContainer" containerID="57746098ad306cb3039f8ee75ec4203722dcaf04ca8ec10ec4bae99de921040a" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.367591 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c5f858c6d-zxwsh"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.375210 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6c5f858c6d-zxwsh"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.383531 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4b299f-9ab6-4714-b911-9b1e11708f39-kube-api-access-8kxhw" (OuterVolumeSpecName: "kube-api-access-8kxhw") pod "cd4b299f-9ab6-4714-b911-9b1e11708f39" (UID: "cd4b299f-9ab6-4714-b911-9b1e11708f39"). InnerVolumeSpecName "kube-api-access-8kxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.394960 4990 scope.go:117] "RemoveContainer" containerID="ddbc25bf62fa17b335529abe8efcf931bb13fc14cc17b4da59cbfeb8d6d41a3a" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.414438 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.423467 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.425119 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dc80822-8cd5-4004-abdd-160ad6dcdd72" (UID: "0dc80822-8cd5-4004-abdd-160ad6dcdd72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434617 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-galera-tls-certs\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434662 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434706 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-combined-ca-bundle\") pod \"4b5ac2be-fc48-4bde-a668-b3549462a101\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434732 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-plugins-conf\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434753 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts\") pod \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434770 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-confd\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434792 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/809c1920-3205-411c-a8c1-ed027b7e3b1f-pod-info\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434821 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed473a7a-f068-49a3-ae4c-b57b39e33b28-erlang-cookie-secret\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434843 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-tls\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434906 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") 
" Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434941 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-credential-keys\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.434975 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24ms9\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-kube-api-access-24ms9\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435006 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-server-conf\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435022 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-config-data\") pod \"4b5ac2be-fc48-4bde-a668-b3549462a101\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435044 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435065 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-kolla-config\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435093 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-operator-scripts\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435108 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvf7\" (UniqueName: \"kubernetes.io/projected/00beb76a-d4d2-4cd8-bc04-e268c2397388-kube-api-access-wjvf7\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435125 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed473a7a-f068-49a3-ae4c-b57b39e33b28-pod-info\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435140 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/809c1920-3205-411c-a8c1-ed027b7e3b1f-erlang-cookie-secret\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " Dec 05 01:35:43 crc 
kubenswrapper[4990]: I1205 01:35:43.435156 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-config-data\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435171 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh5m4\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-kube-api-access-dh5m4\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435191 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-plugins\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435206 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-erlang-cookie\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435227 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-confd\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435262 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-internal-tls-certs\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435287 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4ds7\" (UniqueName: \"kubernetes.io/projected/4b5ac2be-fc48-4bde-a668-b3549462a101-kube-api-access-r4ds7\") pod \"4b5ac2be-fc48-4bde-a668-b3549462a101\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435317 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-default\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435336 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5ac2be-fc48-4bde-a668-b3549462a101-logs\") pod \"4b5ac2be-fc48-4bde-a668-b3549462a101\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435350 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-public-tls-certs\") pod \"4b5ac2be-fc48-4bde-a668-b3549462a101\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") " 
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435364 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435385 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-combined-ca-bundle\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435402 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-generated\") pod \"00beb76a-d4d2-4cd8-bc04-e268c2397388\" (UID: \"00beb76a-d4d2-4cd8-bc04-e268c2397388\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435429 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcss5\" (UniqueName: \"kubernetes.io/projected/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-kube-api-access-fcss5\") pod \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\" (UID: \"b325e8cb-5fb2-4543-ad3c-c9f42a4572f0\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435446 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-internal-tls-certs\") pod \"4b5ac2be-fc48-4bde-a668-b3549462a101\" (UID: \"4b5ac2be-fc48-4bde-a668-b3549462a101\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435472 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rxk\" (UniqueName: \"kubernetes.io/projected/dd33dbb9-4e51-47db-8129-a93493234f7f-kube-api-access-88rxk\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435535 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-tls\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435560 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-combined-ca-bundle\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435585 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-fernet-keys\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435617 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435638 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-server-conf\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435655 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-erlang-cookie\") pod \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\" (UID: \"ed473a7a-f068-49a3-ae4c-b57b39e33b28\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435680 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-scripts\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435714 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-plugins\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435737 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-plugins-conf\") pod \"809c1920-3205-411c-a8c1-ed027b7e3b1f\" (UID: \"809c1920-3205-411c-a8c1-ed027b7e3b1f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.435755 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-public-tls-certs\") pod \"dd33dbb9-4e51-47db-8129-a93493234f7f\" (UID: \"dd33dbb9-4e51-47db-8129-a93493234f7f\") "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436218 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/cd4b299f-9ab6-4714-b911-9b1e11708f39-kube-api-access-8kxhw\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436232 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436245 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgdvn\" (UniqueName: \"kubernetes.io/projected/0d7643ce-5dd7-48dc-9023-6502e5b0a05a-kube-api-access-cgdvn\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436256 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcs4p\" (UniqueName: \"kubernetes.io/projected/23fef2f1-b3e2-4d6f-8beb-efd01386d758-kube-api-access-xcs4p\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436268 4990 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd4b299f-9ab6-4714-b911-9b1e11708f39-kolla-config\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436280 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56nkv\" (UniqueName: \"kubernetes.io/projected/0dc80822-8cd5-4004-abdd-160ad6dcdd72-kube-api-access-56nkv\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.436300 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" (UID: "b325e8cb-5fb2-4543-ad3c-c9f42a4572f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.438212 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.438524 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.443514 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.445161 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.445402 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b5ac2be-fc48-4bde-a668-b3549462a101-logs" (OuterVolumeSpecName: "logs") pod "4b5ac2be-fc48-4bde-a668-b3549462a101" (UID: "4b5ac2be-fc48-4bde-a668-b3549462a101"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.445470 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.445968 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.446912 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809c1920-3205-411c-a8c1-ed027b7e3b1f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.447030 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.447835 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.448390 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.452358 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.452400 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.453213 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.483705 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.485153 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/809c1920-3205-411c-a8c1-ed027b7e3b1f-pod-info" (OuterVolumeSpecName: "pod-info") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.487687 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-config-data" (OuterVolumeSpecName: "config-data") pod "23fef2f1-b3e2-4d6f-8beb-efd01386d758" (UID: "23fef2f1-b3e2-4d6f-8beb-efd01386d758"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.488274 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-kube-api-access-24ms9" (OuterVolumeSpecName: "kube-api-access-24ms9") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "kube-api-access-24ms9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.491735 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.498388 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed473a7a-f068-49a3-ae4c-b57b39e33b28-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.500016 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.500424 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.500648 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.500970 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.509113 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-scripts" (OuterVolumeSpecName: "scripts") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.509162 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-kube-api-access-fcss5" (OuterVolumeSpecName: "kube-api-access-fcss5") pod "b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" (UID: "b325e8cb-5fb2-4543-ad3c-c9f42a4572f0"). InnerVolumeSpecName "kube-api-access-fcss5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.509197 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-kube-api-access-dh5m4" (OuterVolumeSpecName: "kube-api-access-dh5m4") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "kube-api-access-dh5m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.511917 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-config-data" (OuterVolumeSpecName: "config-data") pod "0dc80822-8cd5-4004-abdd-160ad6dcdd72" (UID: "0dc80822-8cd5-4004-abdd-160ad6dcdd72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.511952 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ed473a7a-f068-49a3-ae4c-b57b39e33b28-pod-info" (OuterVolumeSpecName: "pod-info") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.511987 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd33dbb9-4e51-47db-8129-a93493234f7f-kube-api-access-88rxk" (OuterVolumeSpecName: "kube-api-access-88rxk") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "kube-api-access-88rxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.515669 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00beb76a-d4d2-4cd8-bc04-e268c2397388-kube-api-access-wjvf7" (OuterVolumeSpecName: "kube-api-access-wjvf7") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "kube-api-access-wjvf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.518811 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5ac2be-fc48-4bde-a668-b3549462a101-kube-api-access-r4ds7" (OuterVolumeSpecName: "kube-api-access-r4ds7") pod "4b5ac2be-fc48-4bde-a668-b3549462a101" (UID: "4b5ac2be-fc48-4bde-a668-b3549462a101"). InnerVolumeSpecName "kube-api-access-r4ds7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539680 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539712 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539721 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539730 4990 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539741 4990 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539750 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539758 4990 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/809c1920-3205-411c-a8c1-ed027b7e3b1f-pod-info\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539767 4990 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed473a7a-f068-49a3-ae4c-b57b39e33b28-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539789 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539818 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539827 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539836 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc80822-8cd5-4004-abdd-160ad6dcdd72-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539846 4990 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539854 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24ms9\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-kube-api-access-24ms9\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539862 4990 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-kolla-config\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539870 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539878 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvf7\" (UniqueName: \"kubernetes.io/projected/00beb76a-d4d2-4cd8-bc04-e268c2397388-kube-api-access-wjvf7\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539886 4990 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed473a7a-f068-49a3-ae4c-b57b39e33b28-pod-info\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539893 4990 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/809c1920-3205-411c-a8c1-ed027b7e3b1f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539901 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh5m4\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-kube-api-access-dh5m4\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539911 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539919 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539927 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4ds7\" (UniqueName: \"kubernetes.io/projected/4b5ac2be-fc48-4bde-a668-b3549462a101-kube-api-access-r4ds7\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539935 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-default\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539943 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5ac2be-fc48-4bde-a668-b3549462a101-logs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539956 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539965 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00beb76a-d4d2-4cd8-bc04-e268c2397388-config-data-generated\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539973 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcss5\" (UniqueName: \"kubernetes.io/projected/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0-kube-api-access-fcss5\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539982 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rxk\" (UniqueName: \"kubernetes.io/projected/dd33dbb9-4e51-47db-8129-a93493234f7f-kube-api-access-88rxk\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539991 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.539999 4990 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.540557 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.573650 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.617769 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23fef2f1-b3e2-4d6f-8beb-efd01386d758" (UID: "23fef2f1-b3e2-4d6f-8beb-efd01386d758"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.619668 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-config-data" (OuterVolumeSpecName: "config-data") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.637669 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.655551 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23fef2f1-b3e2-4d6f-8beb-efd01386d758-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.655580 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.655608 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.655621 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.655633 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.659600 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.668459 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data" (OuterVolumeSpecName: "config-data") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.670009 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-config-data" (OuterVolumeSpecName: "config-data") pod "4b5ac2be-fc48-4bde-a668-b3549462a101" (UID: "4b5ac2be-fc48-4bde-a668-b3549462a101"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.680624 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.730530 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd4b299f-9ab6-4714-b911-9b1e11708f39" (UID: "cd4b299f-9ab6-4714-b911-9b1e11708f39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.734391 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b5ac2be-fc48-4bde-a668-b3549462a101" (UID: "4b5ac2be-fc48-4bde-a668-b3549462a101"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.734659 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.745637 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-server-conf" (OuterVolumeSpecName: "server-conf") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.748036 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data" (OuterVolumeSpecName: "config-data") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.756978 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757080 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757135 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757188 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757272 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757328 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757379 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757431 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.757505 4990 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed473a7a-f068-49a3-ae4c-b57b39e33b28-server-conf\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.760656 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b5ac2be-fc48-4bde-a668-b3549462a101" (UID: "4b5ac2be-fc48-4bde-a668-b3549462a101"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.761603 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.768255 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b5ac2be-fc48-4bde-a668-b3549462a101" (UID: "4b5ac2be-fc48-4bde-a668-b3549462a101"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.768600 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "00beb76a-d4d2-4cd8-bc04-e268c2397388" (UID: "00beb76a-d4d2-4cd8-bc04-e268c2397388"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: E1205 01:35:43.781673 4990 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Dec 05 01:35:43 crc kubenswrapper[4990]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-05T01:35:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Dec 05 01:35:43 crc kubenswrapper[4990]: /etc/init.d/functions: line 589: 379 Alarm clock "$@"
Dec 05 01:35:43 crc kubenswrapper[4990]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-nbpzw" message=<
Dec 05 01:35:43 crc kubenswrapper[4990]: Exiting ovn-controller (1) [FAILED]
Dec 05 01:35:43 crc kubenswrapper[4990]: Killing ovn-controller (1) [ OK ]
Dec 05 01:35:43 crc kubenswrapper[4990]: Killing ovn-controller (1) with SIGKILL [ OK ]
Dec 05 01:35:43 crc kubenswrapper[4990]: 2025-12-05T01:35:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Dec 05 01:35:43 crc kubenswrapper[4990]: /etc/init.d/functions: line 589: 379 Alarm clock "$@"
Dec 05 01:35:43 crc kubenswrapper[4990]: >
Dec 05 01:35:43 crc kubenswrapper[4990]: E1205 01:35:43.781709 4990 kuberuntime_container.go:691] "PreStop hook failed" err=<
Dec 05 01:35:43 crc kubenswrapper[4990]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-05T01:35:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Dec 05 01:35:43 crc kubenswrapper[4990]: /etc/init.d/functions: line 589: 379 Alarm clock "$@"
Dec 05 01:35:43 crc kubenswrapper[4990]: > pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" containerID="cri-o://ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.781748 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" containerID="cri-o://ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643" gracePeriod=21
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.786631 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "cd4b299f-9ab6-4714-b911-9b1e11708f39" (UID: "cd4b299f-9ab6-4714-b911-9b1e11708f39"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.794039 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.802305 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-server-conf" (OuterVolumeSpecName: "server-conf") pod "809c1920-3205-411c-a8c1-ed027b7e3b1f" (UID: "809c1920-3205-411c-a8c1-ed027b7e3b1f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.845430 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ed473a7a-f068-49a3-ae4c-b57b39e33b28" (UID: "ed473a7a-f068-49a3-ae4c-b57b39e33b28"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859395 4990 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00beb76a-d4d2-4cd8-bc04-e268c2397388-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859423 4990 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4b299f-9ab6-4714-b911-9b1e11708f39-memcached-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859433 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/809c1920-3205-411c-a8c1-ed027b7e3b1f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859443 4990 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/809c1920-3205-411c-a8c1-ed027b7e3b1f-server-conf\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859451 4990 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed473a7a-f068-49a3-ae4c-b57b39e33b28-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859459 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859467 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5ac2be-fc48-4bde-a668-b3549462a101-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.859475 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.862184 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd33dbb9-4e51-47db-8129-a93493234f7f" (UID: "dd33dbb9-4e51-47db-8129-a93493234f7f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.941542 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10384219-030b-491b-884f-fd761eba4496" path="/var/lib/kubelet/pods/10384219-030b-491b-884f-fd761eba4496/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.942265 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" path="/var/lib/kubelet/pods/2c7241f3-92bb-4295-97d9-4284784b11f3/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.943129 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" path="/var/lib/kubelet/pods/4489a490-bacc-498c-b0e3-d6b5cad13d34/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.944196 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" path="/var/lib/kubelet/pods/47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.944843 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" path="/var/lib/kubelet/pods/5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.945843 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" path="/var/lib/kubelet/pods/82eb03c9-869c-447d-9b78-b4ef916b59ac/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.947377 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" path="/var/lib/kubelet/pods/ba3c2a5d-0bec-4905-8cba-d0e565643fe7/volumes"
Dec 05 01:35:43 crc kubenswrapper[4990]: I1205 01:35:43.965315 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd33dbb9-4e51-47db-8129-a93493234f7f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.092063 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9df8c96-j8dx7" event={"ID":"dd33dbb9-4e51-47db-8129-a93493234f7f","Type":"ContainerDied","Data":"3e9bebf78587e12677198988d037699ec4f8d730cb5ad058fa2ae4af7b220345"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.092466 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc9df8c96-j8dx7"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.110596 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiea53-account-delete-m8d6d" event={"ID":"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d","Type":"ContainerDied","Data":"db5ab2ad34b418acceaae86c6d848fa12ca745c5af6facdb984f6b78167d897b"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.110646 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5ab2ad34b418acceaae86c6d848fa12ca745c5af6facdb984f6b78167d897b"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.115322 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nbpzw_d269e431-18be-4f4a-a63f-fee37cf08d46/ovn-controller/0.log"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.119021 4990 generic.go:334] "Generic (PLEG): container finished" podID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerID="ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643" exitCode=137
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.119169 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw" event={"ID":"d269e431-18be-4f4a-a63f-fee37cf08d46","Type":"ContainerDied","Data":"ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.126786 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement06fa-account-delete-fb874" event={"ID":"ac1cabc4-d51d-43b6-8903-f098d13c1952","Type":"ContainerDied","Data":"92f4dcfe11255c35e4976d54542faeb78216ee28cd1de89aeb76530d56bf5cf2"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.126826 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f4dcfe11255c35e4976d54542faeb78216ee28cd1de89aeb76530d56bf5cf2"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.140886 4990 generic.go:334] "Generic (PLEG): container finished" podID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerID="861c7f58bd75979689aba5728fe607c3673e66e27dce3331238271fb617060bd" exitCode=0
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.140960 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" event={"ID":"fecef393-81c1-4d16-af9e-3d777782dd2f","Type":"ContainerDied","Data":"861c7f58bd75979689aba5728fe607c3673e66e27dce3331238271fb617060bd"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.152711 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05b09-account-delete-wcpvd" event={"ID":"c7bf2416-2722-4ab6-a022-32116155fa68","Type":"ContainerDied","Data":"3cf8a592f6307771d8d2c338cb9026f7c6b841c65ec7edb110af8e86004c2353"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.152760 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cf8a592f6307771d8d2c338cb9026f7c6b841c65ec7edb110af8e86004c2353"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.168007 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera4a4-account-delete-kpsxn" event={"ID":"ab630416-46f3-495f-92c2-732abce81632","Type":"ContainerDied","Data":"189d5e322c0f1073b0da2f66a1183cad81f6ef385526d60e1a49b439cbc20fc0"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.168044 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189d5e322c0f1073b0da2f66a1183cad81f6ef385526d60e1a49b439cbc20fc0"
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.173546 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643 is running failed: container process not found" containerID="ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"]
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.174153 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643 is running failed: container process not found" containerID="ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"]
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.174866 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643 is running failed: container process not found" containerID="ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"]
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.174891 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-nbpzw" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.183644 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.184297 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"809c1920-3205-411c-a8c1-ed027b7e3b1f","Type":"ContainerDied","Data":"7c3a5ff8572e5e01dccbeb3b390173293ce194495e3da7f9a10d6eb438c0a293"}
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.196984 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.197058 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts podName:c7bf2416-2722-4ab6-a022-32116155fa68 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:48.197037947 +0000 UTC m=+1646.573253308 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts") pod "novacell05b09-account-delete-wcpvd" (UID: "c7bf2416-2722-4ab6-a022-32116155fa68") : configmap "openstack-scripts" not found
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.197100 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.197178 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts podName:64bbbfd0-59f8-4fb6-8761-503cdf8b9f36 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:48.197161001 +0000 UTC m=+1646.573376362 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts") pod "barbicandd52-account-delete-2dsms" (UID: "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36") : configmap "openstack-scripts" not found
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.209646 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicandd52-account-delete-2dsms" event={"ID":"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36","Type":"ContainerDied","Data":"ac58697418ac121fa73d0b9a29d202e88198028d1e67c9e8bcc9f85d96288120"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.209676 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac58697418ac121fa73d0b9a29d202e88198028d1e67c9e8bcc9f85d96288120"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211137 4990 generic.go:334] "Generic (PLEG): container finished" podID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerID="acbdcf83ea752767a3b017fe9de23c7e771233760249604ee1ef047acd8ec3f1" exitCode=0
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211226 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211349 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" event={"ID":"ba876d22-269d-46e3-8a91-24c8646d1c75","Type":"ContainerDied","Data":"acbdcf83ea752767a3b017fe9de23c7e771233760249604ee1ef047acd8ec3f1"}
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211434 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211434 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc36c-account-delete-nt5gn"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211470 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211502 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glancebd6b-account-delete-9bmrw"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211511 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211543 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.211626 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.224119 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.224401 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.224666 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.224691 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.225557 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.227584 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.235678 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.235718 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.298889 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.299169 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts podName:ac1cabc4-d51d-43b6-8903-f098d13c1952 nodeName:}" failed. No retries permitted until 2025-12-05 01:35:48.299154667 +0000 UTC m=+1646.675370028 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts") pod "placement06fa-account-delete-fb874" (UID: "ac1cabc4-d51d-43b6-8903-f098d13c1952") : configmap "openstack-scripts" not found Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.298951 4990 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 01:35:44 crc kubenswrapper[4990]: E1205 01:35:44.299428 4990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts podName:82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d nodeName:}" failed. No retries permitted until 2025-12-05 01:35:48.299420405 +0000 UTC m=+1646.675635766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts") pod "novaapiea53-account-delete-m8d6d" (UID: "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d") : configmap "openstack-scripts" not found Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.490171 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.497261 4990 scope.go:117] "RemoveContainer" containerID="48ca7bcf7c508929d069e5d6224db21799fee57e82ffabebd9ac3f8157e41ad0" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.509204 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.521983 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cindera4a4-account-delete-kpsxn" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.534794 4990 scope.go:117] "RemoveContainer" containerID="2af88438a5f3d9524b6a0ed1c550ae8884033f86ecaf85f25e86861a398c0008" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.565370 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.594367 4990 scope.go:117] "RemoveContainer" containerID="78ad1f85b89a93447d310403e8048ded243b7fcf6bbb3b6bcfcc95d09d0ca2a1" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.605302 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts\") pod \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.605337 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpkv\" (UniqueName: \"kubernetes.io/projected/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-kube-api-access-7dpkv\") pod \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\" (UID: \"64bbbfd0-59f8-4fb6-8761-503cdf8b9f36\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.607322 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" (UID: "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.621450 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-kube-api-access-7dpkv" (OuterVolumeSpecName: "kube-api-access-7dpkv") pod "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" (UID: "64bbbfd0-59f8-4fb6-8761-503cdf8b9f36"). InnerVolumeSpecName "kube-api-access-7dpkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.646209 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement06fa-account-delete-fb874" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.662293 4990 scope.go:117] "RemoveContainer" containerID="7321a2542512be340cbfde5ad63b19280567b9288afa32dc396d30a73fafa0d8" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.701629 4990 scope.go:117] "RemoveContainer" containerID="003addad4fd04e4704b5108d770c5e33bbf4691ed94949868baed6f57737b3e3" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708342 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcpvz\" (UniqueName: \"kubernetes.io/projected/c7bf2416-2722-4ab6-a022-32116155fa68-kube-api-access-gcpvz\") pod \"c7bf2416-2722-4ab6-a022-32116155fa68\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708396 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts\") pod \"ab630416-46f3-495f-92c2-732abce81632\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708426 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d75t\" (UniqueName: \"kubernetes.io/projected/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-kube-api-access-7d75t\") pod \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708465 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsvrq\" (UniqueName: \"kubernetes.io/projected/ab630416-46f3-495f-92c2-732abce81632-kube-api-access-dsvrq\") pod \"ab630416-46f3-495f-92c2-732abce81632\" (UID: \"ab630416-46f3-495f-92c2-732abce81632\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708532 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts\") pod \"ac1cabc4-d51d-43b6-8903-f098d13c1952\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708573 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts\") pod \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\" (UID: \"82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708605 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmp4c\" (UniqueName: \"kubernetes.io/projected/ac1cabc4-d51d-43b6-8903-f098d13c1952-kube-api-access-gmp4c\") pod \"ac1cabc4-d51d-43b6-8903-f098d13c1952\" (UID: \"ac1cabc4-d51d-43b6-8903-f098d13c1952\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708631 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts\") pod \"c7bf2416-2722-4ab6-a022-32116155fa68\" (UID: \"c7bf2416-2722-4ab6-a022-32116155fa68\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708890 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.708909 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpkv\" (UniqueName: \"kubernetes.io/projected/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36-kube-api-access-7dpkv\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.709031 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab630416-46f3-495f-92c2-732abce81632" (UID: "ab630416-46f3-495f-92c2-732abce81632"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.709540 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac1cabc4-d51d-43b6-8903-f098d13c1952" (UID: "ac1cabc4-d51d-43b6-8903-f098d13c1952"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.710368 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" (UID: "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.710451 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7bf2416-2722-4ab6-a022-32116155fa68" (UID: "c7bf2416-2722-4ab6-a022-32116155fa68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.714272 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1cabc4-d51d-43b6-8903-f098d13c1952-kube-api-access-gmp4c" (OuterVolumeSpecName: "kube-api-access-gmp4c") pod "ac1cabc4-d51d-43b6-8903-f098d13c1952" (UID: "ac1cabc4-d51d-43b6-8903-f098d13c1952"). InnerVolumeSpecName "kube-api-access-gmp4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.717178 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bf2416-2722-4ab6-a022-32116155fa68-kube-api-access-gcpvz" (OuterVolumeSpecName: "kube-api-access-gcpvz") pod "c7bf2416-2722-4ab6-a022-32116155fa68" (UID: "c7bf2416-2722-4ab6-a022-32116155fa68"). InnerVolumeSpecName "kube-api-access-gcpvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.717742 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-kube-api-access-7d75t" (OuterVolumeSpecName: "kube-api-access-7d75t") pod "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" (UID: "82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d"). InnerVolumeSpecName "kube-api-access-7d75t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.720385 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab630416-46f3-495f-92c2-732abce81632-kube-api-access-dsvrq" (OuterVolumeSpecName: "kube-api-access-dsvrq") pod "ab630416-46f3-495f-92c2-732abce81632" (UID: "ab630416-46f3-495f-92c2-732abce81632"). InnerVolumeSpecName "kube-api-access-dsvrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.723332 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nbpzw_d269e431-18be-4f4a-a63f-fee37cf08d46/ovn-controller/0.log" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.723394 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nbpzw" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.763232 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.764641 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.766360 4990 scope.go:117] "RemoveContainer" containerID="2395f00493820a5ebb30a4c605fb8d5f23ada6746a45d56bd39a11683da2b3c3" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.799470 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronc36c-account-delete-nt5gn"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.809976 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d75t\" (UniqueName: \"kubernetes.io/projected/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-kube-api-access-7d75t\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810035 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsvrq\" (UniqueName: \"kubernetes.io/projected/ab630416-46f3-495f-92c2-732abce81632-kube-api-access-dsvrq\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810049 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac1cabc4-d51d-43b6-8903-f098d13c1952-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810059 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810068 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmp4c\" (UniqueName: \"kubernetes.io/projected/ac1cabc4-d51d-43b6-8903-f098d13c1952-kube-api-access-gmp4c\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810076 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bf2416-2722-4ab6-a022-32116155fa68-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810103 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcpvz\" (UniqueName: 
\"kubernetes.io/projected/c7bf2416-2722-4ab6-a022-32116155fa68-kube-api-access-gcpvz\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.810113 4990 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab630416-46f3-495f-92c2-732abce81632-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.812993 4990 scope.go:117] "RemoveContainer" containerID="ddee66ac66bbe9676ade95661263c28f7cfb48141f52a3c8dcd54d952118736b" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.827623 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutronc36c-account-delete-nt5gn"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.836710 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.886202 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.899227 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.901118 4990 scope.go:117] "RemoveContainer" containerID="39a3ea367ecbac2fdb7b56ed37380e3e71e8af696eebed8fe12028523b333328" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910637 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtbw2\" (UniqueName: \"kubernetes.io/projected/d269e431-18be-4f4a-a63f-fee37cf08d46-kube-api-access-mtbw2\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910703 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data-custom\") pod \"ba876d22-269d-46e3-8a91-24c8646d1c75\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910755 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-ovn-controller-tls-certs\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910777 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fecef393-81c1-4d16-af9e-3d777782dd2f-logs\") pod \"fecef393-81c1-4d16-af9e-3d777782dd2f\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910831 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba876d22-269d-46e3-8a91-24c8646d1c75-logs\") pod \"ba876d22-269d-46e3-8a91-24c8646d1c75\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910922 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-combined-ca-bundle\") pod \"ba876d22-269d-46e3-8a91-24c8646d1c75\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " Dec 05 01:35:44 crc 
kubenswrapper[4990]: I1205 01:35:44.910941 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run-ovn\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.910984 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gffvb\" (UniqueName: \"kubernetes.io/projected/ba876d22-269d-46e3-8a91-24c8646d1c75-kube-api-access-gffvb\") pod \"ba876d22-269d-46e3-8a91-24c8646d1c75\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911001 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-log-ovn\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911050 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data\") pod \"ba876d22-269d-46e3-8a91-24c8646d1c75\" (UID: \"ba876d22-269d-46e3-8a91-24c8646d1c75\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911068 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data\") pod \"fecef393-81c1-4d16-af9e-3d777782dd2f\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911091 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pdk8\" (UniqueName: \"kubernetes.io/projected/fecef393-81c1-4d16-af9e-3d777782dd2f-kube-api-access-9pdk8\") pod \"fecef393-81c1-4d16-af9e-3d777782dd2f\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911116 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data-custom\") pod \"fecef393-81c1-4d16-af9e-3d777782dd2f\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911160 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d269e431-18be-4f4a-a63f-fee37cf08d46-scripts\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911182 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-combined-ca-bundle\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911246 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-combined-ca-bundle\") pod \"fecef393-81c1-4d16-af9e-3d777782dd2f\" (UID: \"fecef393-81c1-4d16-af9e-3d777782dd2f\") " Dec 05 01:35:44 crc 
kubenswrapper[4990]: I1205 01:35:44.911294 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run\") pod \"d269e431-18be-4f4a-a63f-fee37cf08d46\" (UID: \"d269e431-18be-4f4a-a63f-fee37cf08d46\") " Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.911641 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run" (OuterVolumeSpecName: "var-run") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.912619 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.914809 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d269e431-18be-4f4a-a63f-fee37cf08d46-kube-api-access-mtbw2" (OuterVolumeSpecName: "kube-api-access-mtbw2") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "kube-api-access-mtbw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.915696 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fecef393-81c1-4d16-af9e-3d777782dd2f" (UID: "fecef393-81c1-4d16-af9e-3d777782dd2f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.918985 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba876d22-269d-46e3-8a91-24c8646d1c75-logs" (OuterVolumeSpecName: "logs") pod "ba876d22-269d-46e3-8a91-24c8646d1c75" (UID: "ba876d22-269d-46e3-8a91-24c8646d1c75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.919071 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.919602 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d269e431-18be-4f4a-a63f-fee37cf08d46-scripts" (OuterVolumeSpecName: "scripts") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.920229 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.920702 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecef393-81c1-4d16-af9e-3d777782dd2f-logs" (OuterVolumeSpecName: "logs") pod "fecef393-81c1-4d16-af9e-3d777782dd2f" (UID: "fecef393-81c1-4d16-af9e-3d777782dd2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.922226 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba876d22-269d-46e3-8a91-24c8646d1c75-kube-api-access-gffvb" (OuterVolumeSpecName: "kube-api-access-gffvb") pod "ba876d22-269d-46e3-8a91-24c8646d1c75" (UID: "ba876d22-269d-46e3-8a91-24c8646d1c75"). InnerVolumeSpecName "kube-api-access-gffvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.924951 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ba876d22-269d-46e3-8a91-24c8646d1c75" (UID: "ba876d22-269d-46e3-8a91-24c8646d1c75"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.924993 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.932711 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5dc9df8c96-j8dx7"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.944393 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5dc9df8c96-j8dx7"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.945637 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecef393-81c1-4d16-af9e-3d777782dd2f-kube-api-access-9pdk8" (OuterVolumeSpecName: "kube-api-access-9pdk8") pod "fecef393-81c1-4d16-af9e-3d777782dd2f" (UID: "fecef393-81c1-4d16-af9e-3d777782dd2f"). InnerVolumeSpecName "kube-api-access-9pdk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.952202 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.953120 4990 scope.go:117] "RemoveContainer" containerID="5f5960287e71d7a833bacd70a7ad0510d80b6d222b74bb7c1aa36b55923710c9" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.953353 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1847f2cb-e2fb-4dc0-8f4b-bf6e43212454/ovn-northd/0.log" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.953460 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.959632 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.964133 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.967872 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glancebd6b-account-delete-9bmrw"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.975163 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glancebd6b-account-delete-9bmrw"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.981547 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.982827 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data" (OuterVolumeSpecName: "config-data") pod "ba876d22-269d-46e3-8a91-24c8646d1c75" (UID: "ba876d22-269d-46e3-8a91-24c8646d1c75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.985473 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba876d22-269d-46e3-8a91-24c8646d1c75" (UID: "ba876d22-269d-46e3-8a91-24c8646d1c75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.988880 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.992847 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data" (OuterVolumeSpecName: "config-data") pod "fecef393-81c1-4d16-af9e-3d777782dd2f" (UID: "fecef393-81c1-4d16-af9e-3d777782dd2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:44 crc kubenswrapper[4990]: I1205 01:35:44.994444 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.000605 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.005597 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.005816 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fecef393-81c1-4d16-af9e-3d777782dd2f" (UID: "fecef393-81c1-4d16-af9e-3d777782dd2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.011434 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012171 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9xzj\" (UniqueName: \"kubernetes.io/projected/426a0569-3dcd-4f28-9556-d4be5f1bdc18-kube-api-access-b9xzj\") pod \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012227 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-combined-ca-bundle\") pod \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012388 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-config-data\") pod \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\" (UID: \"426a0569-3dcd-4f28-9556-d4be5f1bdc18\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012768 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012792 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012805 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012820 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdk8\" (UniqueName: \"kubernetes.io/projected/fecef393-81c1-4d16-af9e-3d777782dd2f-kube-api-access-9pdk8\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012832 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d269e431-18be-4f4a-a63f-fee37cf08d46-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012842 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012852 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecef393-81c1-4d16-af9e-3d777782dd2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012862 4990 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012873 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtbw2\" 
(UniqueName: \"kubernetes.io/projected/d269e431-18be-4f4a-a63f-fee37cf08d46-kube-api-access-mtbw2\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012883 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012895 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fecef393-81c1-4d16-af9e-3d777782dd2f-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012905 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba876d22-269d-46e3-8a91-24c8646d1c75-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012915 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba876d22-269d-46e3-8a91-24c8646d1c75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012964 4990 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012976 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gffvb\" (UniqueName: \"kubernetes.io/projected/ba876d22-269d-46e3-8a91-24c8646d1c75-kube-api-access-gffvb\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.012987 4990 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d269e431-18be-4f4a-a63f-fee37cf08d46-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.015527 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426a0569-3dcd-4f28-9556-d4be5f1bdc18-kube-api-access-b9xzj" (OuterVolumeSpecName: "kube-api-access-b9xzj") pod "426a0569-3dcd-4f28-9556-d4be5f1bdc18" (UID: "426a0569-3dcd-4f28-9556-d4be5f1bdc18"). InnerVolumeSpecName "kube-api-access-b9xzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.020197 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.025116 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.030737 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "d269e431-18be-4f4a-a63f-fee37cf08d46" (UID: "d269e431-18be-4f4a-a63f-fee37cf08d46"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.036422 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-config-data" (OuterVolumeSpecName: "config-data") pod "426a0569-3dcd-4f28-9556-d4be5f1bdc18" (UID: "426a0569-3dcd-4f28-9556-d4be5f1bdc18"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.036995 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426a0569-3dcd-4f28-9556-d4be5f1bdc18" (UID: "426a0569-3dcd-4f28-9556-d4be5f1bdc18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.114743 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-rundir\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.114829 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-combined-ca-bundle\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.114908 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-scripts\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.114951 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt7xs\" (UniqueName: \"kubernetes.io/projected/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-kube-api-access-lt7xs\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.115089 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-metrics-certs-tls-certs\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.115118 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-config\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.115259 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-northd-tls-certs\") pod \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\" (UID: \"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.115910 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-scripts" (OuterVolumeSpecName: "scripts") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116002 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116095 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d269e431-18be-4f4a-a63f-fee37cf08d46-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116114 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116126 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116136 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9xzj\" (UniqueName: \"kubernetes.io/projected/426a0569-3dcd-4f28-9556-d4be5f1bdc18-kube-api-access-b9xzj\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116148 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116221 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426a0569-3dcd-4f28-9556-d4be5f1bdc18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.116709 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-config" (OuterVolumeSpecName: "config") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.119052 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-kube-api-access-lt7xs" (OuterVolumeSpecName: "kube-api-access-lt7xs") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "kube-api-access-lt7xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.147808 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.172929 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.186973 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" (UID: "1847f2cb-e2fb-4dc0-8f4b-bf6e43212454"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.217120 4990 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.217146 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.217157 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt7xs\" (UniqueName: \"kubernetes.io/projected/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-kube-api-access-lt7xs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.217167 4990 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.217176 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.240117 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nbpzw_d269e431-18be-4f4a-a63f-fee37cf08d46/ovn-controller/0.log" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.240183 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nbpzw" event={"ID":"d269e431-18be-4f4a-a63f-fee37cf08d46","Type":"ContainerDied","Data":"4d7e5f2ffb0fffe4802f5f2db67dd63212cdb0ef605c28834e87bfe10bde9595"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.240286 4990 scope.go:117] "RemoveContainer" containerID="ce37020e9ce769fcf273d3dc5584fc28db4503d6df9ad3d7d13fbe7900daa643" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.240381 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nbpzw" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.246230 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05b09-account-delete-wcpvd" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.251678 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cindera4a4-account-delete-kpsxn" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.255836 4990 generic.go:334] "Generic (PLEG): container finished" podID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerID="caed40083fa597fe943a30fe27bc0e925ac084161f972ab054dae2a9368983ca" exitCode=0 Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.255905 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerDied","Data":"caed40083fa597fe943a30fe27bc0e925ac084161f972ab054dae2a9368983ca"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.298614 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicandd52-account-delete-2dsms" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.302618 4990 generic.go:334] "Generic (PLEG): container finished" podID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" exitCode=0 Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.302997 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"426a0569-3dcd-4f28-9556-d4be5f1bdc18","Type":"ContainerDied","Data":"4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.303040 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"426a0569-3dcd-4f28-9556-d4be5f1bdc18","Type":"ContainerDied","Data":"c205db4ebc5778e52009e312ac021ae0ac38f74fd09112d5bace0726e992ee93"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.303069 4990 scope.go:117] "RemoveContainer" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.303431 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.308024 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.312187 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiea53-account-delete-m8d6d" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.330293 4990 scope.go:117] "RemoveContainer" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.331089 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement06fa-account-delete-fb874" Dec 05 01:35:45 crc kubenswrapper[4990]: E1205 01:35:45.331262 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8\": container with ID starting with 4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8 not found: ID does not exist" containerID="4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.331301 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8"} err="failed to get container status \"4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8\": rpc error: code = NotFound desc = could not find container \"4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8\": container with ID starting with 4c1a4d82fc529e57f60fa033b1bfcb08aa397f1149fd6f70c841640e8abdbde8 not found: ID does not exist" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.347140 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05b09-account-delete-wcpvd"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.348338 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" event={"ID":"ba876d22-269d-46e3-8a91-24c8646d1c75","Type":"ContainerDied","Data":"4a026481cbb4f1521571d3ccaddde62af575ca56c76ba546208570b5cc6e8417"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.348396 4990 scope.go:117] "RemoveContainer" containerID="acbdcf83ea752767a3b017fe9de23c7e771233760249604ee1ef047acd8ec3f1" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.348539 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-557cdcfdf5-b7n8x" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.352574 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" event={"ID":"fecef393-81c1-4d16-af9e-3d777782dd2f","Type":"ContainerDied","Data":"a9b42b0996a458efb5b9995ea1b4fbefc0b33a04d03b9a59af1c896dd0e2fc9c"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.352758 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.354875 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell05b09-account-delete-wcpvd"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.363332 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1847f2cb-e2fb-4dc0-8f4b-bf6e43212454/ovn-northd/0.log" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.363397 4990 generic.go:334] "Generic (PLEG): container finished" podID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" exitCode=139 Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.363429 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454","Type":"ContainerDied","Data":"77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.363531 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1847f2cb-e2fb-4dc0-8f4b-bf6e43212454","Type":"ContainerDied","Data":"94f2c6137a1248cd5c311d43aa64ecf777baa66c13d77a820ff3f8866c28f34e"} Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.363627 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.381990 4990 scope.go:117] "RemoveContainer" containerID="fb190cc3a575d8d3e5ae585358a9c8f90d298cdbc882ba5af1329dfb10d6b5c0" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.410778 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nbpzw"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.418643 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nbpzw"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.419806 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-run-httpd\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.419879 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-config-data\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.419918 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-log-httpd\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.420005 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-ceilometer-tls-certs\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.420043 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-combined-ca-bundle\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.420601 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.420722 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.421059 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-sg-core-conf-yaml\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.421118 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vcx8\" (UniqueName: \"kubernetes.io/projected/c8cbf17b-4408-40ea-81bd-c70478cf6095-kube-api-access-7vcx8\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.421170 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-scripts\") pod \"c8cbf17b-4408-40ea-81bd-c70478cf6095\" (UID: \"c8cbf17b-4408-40ea-81bd-c70478cf6095\") " Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.421419 4990 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.421435 4990 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cbf17b-4408-40ea-81bd-c70478cf6095-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.438180 4990 scope.go:117] "RemoveContainer" containerID="861c7f58bd75979689aba5728fe607c3673e66e27dce3331238271fb617060bd" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.443005 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicandd52-account-delete-2dsms"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.443697 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cbf17b-4408-40ea-81bd-c70478cf6095-kube-api-access-7vcx8" (OuterVolumeSpecName: "kube-api-access-7vcx8") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "kube-api-access-7vcx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.445631 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-scripts" (OuterVolumeSpecName: "scripts") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.452528 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbicandd52-account-delete-2dsms"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.463045 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.471166 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.471938 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.479738 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.487640 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-67b9d4ffcb-hc2dk"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.496161 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cindera4a4-account-delete-kpsxn"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.502457 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.506463 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cindera4a4-account-delete-kpsxn"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.514005 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.522218 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.522873 4990 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.522913 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.522923 4990 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.522931 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vcx8\" (UniqueName: \"kubernetes.io/projected/c8cbf17b-4408-40ea-81bd-c70478cf6095-kube-api-access-7vcx8\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.522940 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.526299 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.188:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.526691 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.539312 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement06fa-account-delete-fb874"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.542847 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-config-data" (OuterVolumeSpecName: "config-data") pod "c8cbf17b-4408-40ea-81bd-c70478cf6095" (UID: "c8cbf17b-4408-40ea-81bd-c70478cf6095"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.545376 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement06fa-account-delete-fb874"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.549816 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-557cdcfdf5-b7n8x"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.560143 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-557cdcfdf5-b7n8x"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.582324 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapiea53-account-delete-m8d6d"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.589932 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapiea53-account-delete-m8d6d"] Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.609988 4990 scope.go:117] "RemoveContainer" containerID="f737002671cee197ff06fe034d7cff773e243d6fb39490186391618718dde0ca" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.625298 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cbf17b-4408-40ea-81bd-c70478cf6095-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.635017 4990 scope.go:117] "RemoveContainer" containerID="0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.656205 4990 scope.go:117] "RemoveContainer" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.674441 4990 scope.go:117] "RemoveContainer" containerID="0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8" Dec 05 01:35:45 crc kubenswrapper[4990]: E1205 01:35:45.675060 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8\": container with ID starting with 0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8 not found: ID does not exist" containerID="0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.675123 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8"} err="failed to get container status \"0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8\": rpc error: code = NotFound desc = could not find container \"0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8\": container with ID starting with 0b1346f688be23b450be3772b345fd19bcc9573a72afce9dae7e5f33c22320e8 not found: ID does not exist" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.675161 4990 scope.go:117] "RemoveContainer" containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" Dec 05 01:35:45 crc kubenswrapper[4990]: E1205 01:35:45.675701 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c\": container with ID starting with 77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c not found: ID does not exist" 
containerID="77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.675740 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c"} err="failed to get container status \"77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c\": rpc error: code = NotFound desc = could not find container \"77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c\": container with ID starting with 77eda99de79f1606c252cb06ece67f2f6f226ccf89000f0de068f41aaab2a00c not found: ID does not exist" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.945192 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" path="/var/lib/kubelet/pods/00beb76a-d4d2-4cd8-bc04-e268c2397388/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.945959 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d7643ce-5dd7-48dc-9023-6502e5b0a05a" path="/var/lib/kubelet/pods/0d7643ce-5dd7-48dc-9023-6502e5b0a05a/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.946470 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" path="/var/lib/kubelet/pods/0dc80822-8cd5-4004-abdd-160ad6dcdd72/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.947718 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" path="/var/lib/kubelet/pods/1847f2cb-e2fb-4dc0-8f4b-bf6e43212454/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.948573 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" path="/var/lib/kubelet/pods/23fef2f1-b3e2-4d6f-8beb-efd01386d758/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.949117 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" path="/var/lib/kubelet/pods/426a0569-3dcd-4f28-9556-d4be5f1bdc18/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.950253 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" path="/var/lib/kubelet/pods/4b5ac2be-fc48-4bde-a668-b3549462a101/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.950846 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" path="/var/lib/kubelet/pods/64bbbfd0-59f8-4fb6-8761-503cdf8b9f36/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.951599 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" path="/var/lib/kubelet/pods/809c1920-3205-411c-a8c1-ed027b7e3b1f/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.955737 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" path="/var/lib/kubelet/pods/82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.956254 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab630416-46f3-495f-92c2-732abce81632" path="/var/lib/kubelet/pods/ab630416-46f3-495f-92c2-732abce81632/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.956809 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" path="/var/lib/kubelet/pods/ac1cabc4-d51d-43b6-8903-f098d13c1952/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.958056 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" path="/var/lib/kubelet/pods/b325e8cb-5fb2-4543-ad3c-c9f42a4572f0/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.958610 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" path="/var/lib/kubelet/pods/ba876d22-269d-46e3-8a91-24c8646d1c75/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.959175 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" path="/var/lib/kubelet/pods/c7bf2416-2722-4ab6-a022-32116155fa68/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.960159 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4b299f-9ab6-4714-b911-9b1e11708f39" path="/var/lib/kubelet/pods/cd4b299f-9ab6-4714-b911-9b1e11708f39/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.960819 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" path="/var/lib/kubelet/pods/d269e431-18be-4f4a-a63f-fee37cf08d46/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.961329 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd33dbb9-4e51-47db-8129-a93493234f7f" path="/var/lib/kubelet/pods/dd33dbb9-4e51-47db-8129-a93493234f7f/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.962673 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" path="/var/lib/kubelet/pods/ed473a7a-f068-49a3-ae4c-b57b39e33b28/volumes" Dec 05 01:35:45 crc kubenswrapper[4990]: I1205 01:35:45.963260 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" path="/var/lib/kubelet/pods/fecef393-81c1-4d16-af9e-3d777782dd2f/volumes" Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.392590 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8cbf17b-4408-40ea-81bd-c70478cf6095","Type":"ContainerDied","Data":"4dba06b72fa8e7ff768cc233f523c342e0a6f9862022e329cf2dbd8bf4a341ea"} Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.392918 4990 scope.go:117] "RemoveContainer" containerID="60607d382cd8bda26a5778bed70f82be69af7e7f24f195984c6f642727d62e2c" Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.392624 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.420707 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.427807 4990 scope.go:117] "RemoveContainer" containerID="72b6662721b483f07b892ef1907c45dc83be4d09b4ac2d3ef321bc8da7ab9d10" Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.428386 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.452062 4990 scope.go:117] "RemoveContainer" containerID="caed40083fa597fe943a30fe27bc0e925ac084161f972ab054dae2a9368983ca" Dec 05 01:35:46 crc kubenswrapper[4990]: I1205 01:35:46.474723 4990 scope.go:117] "RemoveContainer" containerID="31ce6ea891092f920fafd58685b5970d0c8960a1faf1c70db62854e2178e153a" Dec 05 01:35:47 crc kubenswrapper[4990]: I1205 01:35:47.942942 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" path="/var/lib/kubelet/pods/c8cbf17b-4408-40ea-81bd-c70478cf6095/volumes" Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.224525 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.226402 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.227024 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.227624 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.227687 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.230463 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.232458 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:49 crc kubenswrapper[4990]: E1205 01:35:49.232579 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:35:50 crc kubenswrapper[4990]: E1205 01:35:50.910552 4990 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d8e2e9_244e_48b4_b99f_2606dc492482.slice/crio-eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4.scope\": RecentStats: unable to find data in memory cache]" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.290879 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.437429 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-public-tls-certs\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.437915 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-ovndb-tls-certs\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.437993 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-internal-tls-certs\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.438099 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-combined-ca-bundle\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.438143 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-httpd-config\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.438161 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-config\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.438196 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr24p\" (UniqueName: \"kubernetes.io/projected/60d8e2e9-244e-48b4-b99f-2606dc492482-kube-api-access-dr24p\") pod \"60d8e2e9-244e-48b4-b99f-2606dc492482\" (UID: \"60d8e2e9-244e-48b4-b99f-2606dc492482\") " Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.465928 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.490405 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d8e2e9-244e-48b4-b99f-2606dc492482-kube-api-access-dr24p" (OuterVolumeSpecName: "kube-api-access-dr24p") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "kube-api-access-dr24p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.503993 4990 generic.go:334] "Generic (PLEG): container finished" podID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerID="eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4" exitCode=0 Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.504064 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcd7bc79-skn69" event={"ID":"60d8e2e9-244e-48b4-b99f-2606dc492482","Type":"ContainerDied","Data":"eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4"} Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.504108 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fdcd7bc79-skn69" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.504137 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcd7bc79-skn69" event={"ID":"60d8e2e9-244e-48b4-b99f-2606dc492482","Type":"ContainerDied","Data":"209a2976d8ce4f163ae305fe2afc6542db4baa1c285f4f5db32237e0ce4eaf53"} Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.504176 4990 scope.go:117] "RemoveContainer" containerID="ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.525783 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.531562 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.539453 4990 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.539476 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr24p\" (UniqueName: \"kubernetes.io/projected/60d8e2e9-244e-48b4-b99f-2606dc492482-kube-api-access-dr24p\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.539505 4990 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.539513 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.545883 4990 scope.go:117] "RemoveContainer" containerID="eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.548683 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-config" (OuterVolumeSpecName: "config") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.550003 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.558048 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "60d8e2e9-244e-48b4-b99f-2606dc492482" (UID: "60d8e2e9-244e-48b4-b99f-2606dc492482"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.565908 4990 scope.go:117] "RemoveContainer" containerID="ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2" Dec 05 01:35:51 crc kubenswrapper[4990]: E1205 01:35:51.566361 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2\": container with ID starting with ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2 not found: ID does not exist" containerID="ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.566390 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2"} err="failed to get container status \"ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2\": rpc error: code = NotFound desc = could not find container \"ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2\": container with ID starting with ce448fae1000038dc6291f65727caed20202326c1f6236af230b2af7ffcb78b2 not found: ID does not exist" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.566412 4990 scope.go:117] "RemoveContainer" containerID="eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4" Dec 05 01:35:51 crc kubenswrapper[4990]: E1205 01:35:51.566769 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4\": container with ID starting with eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4 not found: ID does not exist" containerID="eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.566794 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4"} err="failed to get container status \"eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4\": rpc error: code = NotFound desc = could not find container \"eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4\": container with ID starting with eac148450cb53affd7b9d7676017888bdad6962caf67e03295f4aca0531c7ef4 not found: ID does not exist" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.643674 4990 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.643722 4990 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-config\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.643741 4990 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8e2e9-244e-48b4-b99f-2606dc492482-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.823361 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.823429 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.857087 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fdcd7bc79-skn69"] Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.867609 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7fdcd7bc79-skn69"] Dec 05 01:35:51 crc kubenswrapper[4990]: I1205 01:35:51.943867 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" path="/var/lib/kubelet/pods/60d8e2e9-244e-48b4-b99f-2606dc492482/volumes" Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.223759 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.224716 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.225198 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.225281 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.225612 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.227087 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.229263 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:54 crc kubenswrapper[4990]: E1205 01:35:54.229323 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.224006 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.225147 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.225544 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.225738 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.225787 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.227998 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
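
These ExecSync failures are readiness probes still firing against containers that are shutting down, and the repeats at 01:35:49, :54 and :59 suggest a 5-second probe period. The pod spec itself is not in the log, but an exec readiness probe wired to ovsdb_server_readiness.sh would look roughly like the sketch below, using the k8s.io/api core/v1 types (field names per recent Kubernetes releases); the period and timeout values are assumptions.

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Assumed shape of the ovsdb-server container's readiness probe; the
    	// kubelet turns this into the ExecSync calls that fail above once the
    	// container starts stopping.
    	probe := &v1.Probe{
    		ProbeHandler: v1.ProbeHandler{
    			Exec: &v1.ExecAction{
    				Command: []string{"/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"},
    			},
    		},
    		PeriodSeconds:  5, // matches the ~5s spacing of the failures above
    		TimeoutSeconds: 1,
    	}
    	fmt.Println(probe.Exec.Command, probe.PeriodSeconds)
    }
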
containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.230199 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:35:59 crc kubenswrapper[4990]: E1205 01:35:59.230265 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.531083 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rd4gl"] Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.533241 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.533384 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.533476 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="proxy-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.533597 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="proxy-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.533691 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="sg-core" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.533775 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="sg-core" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.533876 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="rabbitmq" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.533960 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="rabbitmq" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.534047 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4182c8b1-5c4d-4f6b-aeca-9492abf6069e" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.534130 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4182c8b1-5c4d-4f6b-aeca-9492abf6069e" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.534674 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.534755 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.534833 
4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="ovsdbserver-nb" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.534917 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="ovsdbserver-nb" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.534996 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.535066 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-api" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.535138 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.535216 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.535300 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.535379 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.535468 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7643ce-5dd7-48dc-9023-6502e5b0a05a" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.535562 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7643ce-5dd7-48dc-9023-6502e5b0a05a" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.535659 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.535742 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.535824 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.535903 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-api" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.535986 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.536063 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.536140 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.536211 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api" Dec 05 01:36:03 
crc kubenswrapper[4990]: E1205 01:36:03.536289 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.536369 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.536456 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="mysql-bootstrap" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.536557 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="mysql-bootstrap" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.536637 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerName="dnsmasq-dns" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.536707 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerName="dnsmasq-dns" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.536792 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.536864 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.536936 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="ovsdbserver-sb" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.537019 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="ovsdbserver-sb" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.537129 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-central-agent" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.537224 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-central-agent" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.537317 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.537399 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-api" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.537508 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="galera" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.537602 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="galera" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.537682 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="rabbitmq" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.537758 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="rabbitmq" 
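[Annotation] The paired "E1205 ... cpu_manager.go:410 RemoveStaleState: removing container" / "I1205 ... state_mem.go:107 Deleted CPUSet assignment" entries above (and continuing below) show the kubelet's CPU manager reconciling its checkpointed state against the live pod set when a new pod is admitted: every CPUSet assignment recorded for a container that no longer exists is dropped. The sketch below is a minimal, hypothetical illustration of that reconcile pattern only; `staleStateCleaner`, `Assignments`, and `activeContainers` are assumed names, not the kubelet's actual types.

```go
package main

import "fmt"

// key identifies a container the same way the log lines do.
type key struct{ podUID, containerName string }

// staleStateCleaner is a hypothetical stand-in for the CPU manager's
// in-memory checkpoint (state_mem): container -> assigned CPU set.
type staleStateCleaner struct {
	Assignments map[key]string // e.g. "0-3"
}

// RemoveStaleState drops every assignment whose container is no longer
// active, mirroring the "RemoveStaleState: removing container" /
// "Deleted CPUSet assignment" pairs in the log above.
func (s *staleStateCleaner) RemoveStaleState(activeContainers map[key]bool) {
	for k := range s.Assignments { // deleting during range is safe in Go
		if !activeContainers[k] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.containerName)
			delete(s.Assignments, k)
			fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n",
				k.podUID, k.containerName)
		}
	}
}

func main() {
	s := &staleStateCleaner{Assignments: map[key]string{
		{"809c1920-3205-411c-a8c1-ed027b7e3b1f", "rabbitmq"}: "2-3",
	}}
	// No containers alive, so both log lines fire for the stale entry.
	s.RemoveStaleState(map[key]bool{})
}
```

The same sweep repeats a second time further down for the memory manager ("memory_manager.go:354 RemoveStaleState removing state"), which keeps an analogous per-container checkpoint.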
Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.537831 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.537899 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.537971 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="galera" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.538060 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="galera" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.538147 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="cinder-scheduler" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.538217 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="cinder-scheduler" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.538287 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.538356 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.538441 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.538560 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.538679 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="probe" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.538757 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="probe" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.538848 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.538946 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539022 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.539100 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539191 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" containerName="nova-cell0-conductor-conductor" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.539272 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" 
containerName="nova-cell0-conductor-conductor" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539364 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="ovn-northd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.539442 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="ovn-northd" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539539 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-metadata" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.539623 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-metadata" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539705 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.539781 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539855 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="setup-container" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.539926 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="setup-container" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.539997 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.540076 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.540158 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerName="init" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.540230 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerName="init" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.540312 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" containerName="kube-state-metrics" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.540383 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" containerName="kube-state-metrics" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.540456 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.540545 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.540619 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab630416-46f3-495f-92c2-732abce81632" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.540706 4990 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ab630416-46f3-495f-92c2-732abce81632" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.540788 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab630416-46f3-495f-92c2-732abce81632" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.540869 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab630416-46f3-495f-92c2-732abce81632" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.540943 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="setup-container" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.541018 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="setup-container" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.541117 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.541205 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.541298 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="mysql-bootstrap" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.541379 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="mysql-bootstrap" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.541462 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e80556-5f2d-44ed-b165-3211fd50ad98" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.541556 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e80556-5f2d-44ed-b165-3211fd50ad98" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.541642 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.541726 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.541798 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.541872 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.541957 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.542036 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.542117 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: 
I1205 01:36:03.542189 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.542279 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.542360 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.542442 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545255 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545310 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545323 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545343 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545357 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545390 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4b299f-9ab6-4714-b911-9b1e11708f39" containerName="memcached" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545403 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4b299f-9ab6-4714-b911-9b1e11708f39" containerName="memcached" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545418 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545432 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545444 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-notification-agent" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545455 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-notification-agent" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545469 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" containerName="nova-cell1-conductor-conductor" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545546 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" containerName="nova-cell1-conductor-conductor" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545575 4990 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" containerName="nova-scheduler-scheduler" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545587 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" containerName="nova-scheduler-scheduler" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545608 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545618 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener-log" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545639 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545651 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545667 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd33dbb9-4e51-47db-8129-a93493234f7f" containerName="keystone-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545678 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd33dbb9-4e51-47db-8129-a93493234f7f" containerName="keystone-api" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545692 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545702 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545713 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-server" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545724 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-server" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.545739 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.545750 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546189 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546215 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab630416-46f3-495f-92c2-732abce81632" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546234 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" containerName="barbican-keystone-listener" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546249 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api" Dec 
05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546267 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546284 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546299 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546313 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546331 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546346 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-server" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546359 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546368 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4b299f-9ab6-4714-b911-9b1e11708f39" containerName="memcached" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546386 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c281c58-a95e-4669-bdfc-465759817928" containerName="galera" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546402 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1cabc4-d51d-43b6-8903-f098d13c1952" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546416 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d7643ce-5dd7-48dc-9023-6502e5b0a05a" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546435 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4182c8b1-5c4d-4f6b-aeca-9492abf6069e" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546452 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc80822-8cd5-4004-abdd-160ad6dcdd72" containerName="nova-scheduler-scheduler" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546468 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546502 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546517 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546528 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab630416-46f3-495f-92c2-732abce81632" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 
01:36:03.546540 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb029546-9d20-445a-9926-2a43c235a755" containerName="proxy-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546551 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546564 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e90bf4-ce4f-4b75-8a22-1ebe4c0df94e" containerName="dnsmasq-dns" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546577 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="82eb03c9-869c-447d-9b78-b4ef916b59ac" containerName="placement-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546594 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" containerName="kube-state-metrics" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546608 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="probe" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546621 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546634 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546652 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-metadata" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546663 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca5e656-876c-4e87-b049-5c284b211804" containerName="cinder-scheduler" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546673 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e78fdb-b9eb-4edf-9e4c-90831d0e4fb3" containerName="glance-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546683 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d269e431-18be-4f4a-a63f-fee37cf08d46" containerName="ovn-controller" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546698 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-central-agent" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546718 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546734 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="10384219-030b-491b-884f-fd761eba4496" containerName="cinder-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546750 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed473a7a-f068-49a3-ae4c-b57b39e33b28" containerName="rabbitmq" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546769 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4489a490-bacc-498c-b0e3-d6b5cad13d34" containerName="barbican-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546780 4990 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="proxy-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546793 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="1847f2cb-e2fb-4dc0-8f4b-bf6e43212454" containerName="ovn-northd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546809 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c137d1b-6433-40ac-8036-84313eef1967" containerName="ovsdbserver-nb" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546826 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="b325e8cb-5fb2-4543-ad3c-c9f42a4572f0" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546839 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="00beb76a-d4d2-4cd8-bc04-e268c2397388" containerName="galera" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546851 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="23fef2f1-b3e2-4d6f-8beb-efd01386d758" containerName="nova-cell1-conductor-conductor" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546866 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="426a0569-3dcd-4f28-9556-d4be5f1bdc18" containerName="nova-cell0-conductor-conductor" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546881 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7241f3-92bb-4295-97d9-4284784b11f3" containerName="barbican-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546899 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="ceilometer-notification-agent" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546918 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546935 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4475723-8c01-483c-991d-d686c6361021" containerName="ovsdbserver-sb" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546952 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cbf17b-4408-40ea-81bd-c70478cf6095" containerName="sg-core" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546968 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bf2416-2722-4ab6-a022-32116155fa68" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546986 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="809c1920-3205-411c-a8c1-ed027b7e3b1f" containerName="rabbitmq" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.546998 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d8e2e9-244e-48b4-b99f-2606dc492482" containerName="neutron-httpd" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547014 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3c2a5d-0bec-4905-8cba-d0e565643fe7" containerName="nova-metadata-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547031 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e80556-5f2d-44ed-b165-3211fd50ad98" containerName="openstack-network-exporter" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547045 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecef393-81c1-4d16-af9e-3d777782dd2f" 
containerName="barbican-keystone-listener-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547061 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5ac2be-fc48-4bde-a668-b3549462a101" containerName="nova-api-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547073 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-log" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547084 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd33dbb9-4e51-47db-8129-a93493234f7f" containerName="keystone-api" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547101 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba876d22-269d-46e3-8a91-24c8646d1c75" containerName="barbican-worker" Dec 05 01:36:03 crc kubenswrapper[4990]: E1205 01:36:03.547348 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547363 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547652 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d23f0f-27e5-4cb6-94e8-ec55caf5dd8d" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.547682 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bbbfd0-59f8-4fb6-8761-503cdf8b9f36" containerName="mariadb-account-delete" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.548913 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.552258 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rd4gl"] Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.646338 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjng7\" (UniqueName: \"kubernetes.io/projected/fbc42c3f-ae48-43a4-8f55-23efb52a86de-kube-api-access-pjng7\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.646576 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-catalog-content\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.646653 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-utilities\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.748077 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjng7\" (UniqueName: \"kubernetes.io/projected/fbc42c3f-ae48-43a4-8f55-23efb52a86de-kube-api-access-pjng7\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.748194 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-catalog-content\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.748230 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-utilities\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.748742 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-utilities\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.749060 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-catalog-content\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.767405 4990 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pjng7\" (UniqueName: \"kubernetes.io/projected/fbc42c3f-ae48-43a4-8f55-23efb52a86de-kube-api-access-pjng7\") pod \"certified-operators-rd4gl\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:03 crc kubenswrapper[4990]: I1205 01:36:03.881421 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.223961 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.224306 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.224833 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.224923 4990 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.224999 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.226210 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.227230 4990 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 01:36:04 crc kubenswrapper[4990]: E1205 01:36:04.227285 4990 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-2j9fb" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:36:04 crc kubenswrapper[4990]: I1205 01:36:04.347141 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rd4gl"] Dec 05 01:36:04 crc kubenswrapper[4990]: I1205 01:36:04.686705 4990 generic.go:334] "Generic (PLEG): container finished" podID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerID="d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be" exitCode=0 Dec 05 01:36:04 crc kubenswrapper[4990]: I1205 01:36:04.686786 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerDied","Data":"d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be"} Dec 05 01:36:04 crc kubenswrapper[4990]: I1205 01:36:04.686840 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerStarted","Data":"4e770847aebe0f232ad9aa2217f2a672b5a1877bc624dd1a7b073b6cad8bd7f2"} Dec 05 01:36:05 crc kubenswrapper[4990]: I1205 01:36:05.698774 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2j9fb_d833c1a0-9e88-4ad3-8bcc-5904d459903a/ovs-vswitchd/0.log" Dec 05 01:36:05 crc kubenswrapper[4990]: I1205 01:36:05.699932 4990 generic.go:334] "Generic (PLEG): container finished" podID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" exitCode=137 Dec 05 01:36:05 crc kubenswrapper[4990]: I1205 01:36:05.699995 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerDied","Data":"88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01"} Dec 05 01:36:05 crc kubenswrapper[4990]: I1205 01:36:05.705574 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerStarted","Data":"4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de"} Dec 05 01:36:05 crc kubenswrapper[4990]: I1205 01:36:05.986975 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2j9fb_d833c1a0-9e88-4ad3-8bcc-5904d459903a/ovs-vswitchd/0.log" Dec 05 01:36:05 crc kubenswrapper[4990]: I1205 01:36:05.988084 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.157217 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.179508 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd6rq\" (UniqueName: \"kubernetes.io/projected/d833c1a0-9e88-4ad3-8bcc-5904d459903a-kube-api-access-pd6rq\") pod \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.179556 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-etc-ovs\") pod \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.179637 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-run\") pod \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.179672 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d833c1a0-9e88-4ad3-8bcc-5904d459903a-scripts\") pod \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.179699 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-log\") pod \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.179770 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-lib\") pod \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\" (UID: \"d833c1a0-9e88-4ad3-8bcc-5904d459903a\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.180179 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-lib" (OuterVolumeSpecName: "var-lib") pod "d833c1a0-9e88-4ad3-8bcc-5904d459903a" (UID: "d833c1a0-9e88-4ad3-8bcc-5904d459903a"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.180270 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-run" (OuterVolumeSpecName: "var-run") pod "d833c1a0-9e88-4ad3-8bcc-5904d459903a" (UID: "d833c1a0-9e88-4ad3-8bcc-5904d459903a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.180699 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "d833c1a0-9e88-4ad3-8bcc-5904d459903a" (UID: "d833c1a0-9e88-4ad3-8bcc-5904d459903a"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.180757 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-log" (OuterVolumeSpecName: "var-log") pod "d833c1a0-9e88-4ad3-8bcc-5904d459903a" (UID: "d833c1a0-9e88-4ad3-8bcc-5904d459903a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.181716 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d833c1a0-9e88-4ad3-8bcc-5904d459903a-scripts" (OuterVolumeSpecName: "scripts") pod "d833c1a0-9e88-4ad3-8bcc-5904d459903a" (UID: "d833c1a0-9e88-4ad3-8bcc-5904d459903a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.192456 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d833c1a0-9e88-4ad3-8bcc-5904d459903a-kube-api-access-pd6rq" (OuterVolumeSpecName: "kube-api-access-pd6rq") pod "d833c1a0-9e88-4ad3-8bcc-5904d459903a" (UID: "d833c1a0-9e88-4ad3-8bcc-5904d459903a"). InnerVolumeSpecName "kube-api-access-pd6rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.280796 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bj89\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-kube-api-access-8bj89\") pod \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.280906 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.280984 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") pod \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.281080 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-lock\") pod \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.281793 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-cache\") pod \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\" (UID: \"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3\") " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.281849 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-lock" (OuterVolumeSpecName: "lock") pod "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282269 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-cache" (OuterVolumeSpecName: "cache") pod "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282546 4990 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282568 4990 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d833c1a0-9e88-4ad3-8bcc-5904d459903a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282584 4990 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-log\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282598 4990 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-lock\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282609 4990 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-var-lib\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282622 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd6rq\" (UniqueName: \"kubernetes.io/projected/d833c1a0-9e88-4ad3-8bcc-5904d459903a-kube-api-access-pd6rq\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282636 4990 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d833c1a0-9e88-4ad3-8bcc-5904d459903a-etc-ovs\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.282647 4990 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-cache\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.283944 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-kube-api-access-8bj89" (OuterVolumeSpecName: "kube-api-access-8bj89") pod "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3"). InnerVolumeSpecName "kube-api-access-8bj89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.284700 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.285292 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "swift") pod "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" (UID: "c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.383416 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bj89\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-kube-api-access-8bj89\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.383474 4990 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.383504 4990 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.398590 4990 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.484358 4990 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.720552 4990 generic.go:334] "Generic (PLEG): container finished" podID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerID="4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de" exitCode=0 Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.720622 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerDied","Data":"4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de"} Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.722756 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2j9fb_d833c1a0-9e88-4ad3-8bcc-5904d459903a/ovs-vswitchd/0.log" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.723948 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2j9fb" event={"ID":"d833c1a0-9e88-4ad3-8bcc-5904d459903a","Type":"ContainerDied","Data":"794a157b8e4c29b8bb0adfe2d37dcdc8ecc2ee90d88651978b6c88fe0a5f4f3a"} Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.723988 4990 scope.go:117] "RemoveContainer" containerID="88c86c95e39217d930a01ef924d917927e9b97fa3c53963b2fe430bae34fff01" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.723950 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2j9fb" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.741873 4990 generic.go:334] "Generic (PLEG): container finished" podID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerID="af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5" exitCode=137 Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.741949 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5"} Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.741974 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.742006 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3","Type":"ContainerDied","Data":"614b69fd6049435f72e6968763e104510365cf2a45be10d51544f05bcceefba6"} Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.761648 4990 scope.go:117] "RemoveContainer" containerID="edd0a4cc57bf1287e15201829fb8edca93ca1f617ade86f92feb2b058c90e940" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.792441 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-2j9fb"] Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.807319 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-2j9fb"] Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.809075 4990 scope.go:117] "RemoveContainer" containerID="be6140227381e6af78101bb3622a3525501ee2117419771e4044f5d64097caae" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.815869 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.822694 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.838784 4990 scope.go:117] "RemoveContainer" containerID="af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.863248 4990 scope.go:117] "RemoveContainer" containerID="4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.881415 4990 scope.go:117] "RemoveContainer" containerID="6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.899752 4990 scope.go:117] "RemoveContainer" containerID="0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.915633 4990 scope.go:117] "RemoveContainer" containerID="f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.930416 4990 scope.go:117] "RemoveContainer" containerID="26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.954902 4990 scope.go:117] "RemoveContainer" containerID="4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd" Dec 05 01:36:06 crc kubenswrapper[4990]: I1205 01:36:06.982615 4990 scope.go:117] "RemoveContainer" containerID="78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8" Dec 05 01:36:07 crc kubenswrapper[4990]: 
I1205 01:36:07.002228 4990 scope.go:117] "RemoveContainer" containerID="c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.026339 4990 scope.go:117] "RemoveContainer" containerID="9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.055105 4990 scope.go:117] "RemoveContainer" containerID="d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.086129 4990 scope.go:117] "RemoveContainer" containerID="9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.115082 4990 scope.go:117] "RemoveContainer" containerID="ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.148248 4990 scope.go:117] "RemoveContainer" containerID="d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.200987 4990 scope.go:117] "RemoveContainer" containerID="8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.255735 4990 scope.go:117] "RemoveContainer" containerID="af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.256723 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5\": container with ID starting with af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5 not found: ID does not exist" containerID="af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.256792 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5"} err="failed to get container status \"af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5\": rpc error: code = NotFound desc = could not find container \"af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5\": container with ID starting with af96dd281dbcdb124501016399a0267209fa29bd5d56b5b3ffc4e188d564c0e5 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.256835 4990 scope.go:117] "RemoveContainer" containerID="4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.257974 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194\": container with ID starting with 4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194 not found: ID does not exist" containerID="4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.258043 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194"} err="failed to get container status \"4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194\": rpc error: code = NotFound desc = could not find container 
\"4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194\": container with ID starting with 4ae06d118eb8ea46cbfca6d00f42445a070e48fa9ccd0ad829f106c51d8ef194 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.258082 4990 scope.go:117] "RemoveContainer" containerID="6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.258854 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb\": container with ID starting with 6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb not found: ID does not exist" containerID="6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.258934 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb"} err="failed to get container status \"6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb\": rpc error: code = NotFound desc = could not find container \"6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb\": container with ID starting with 6de70efde02107c9fbb1a95e593fa59d651e0b070d0fd80a09b5347e7584c5bb not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.258998 4990 scope.go:117] "RemoveContainer" containerID="0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.259771 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa\": container with ID starting with 0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa not found: ID does not exist" containerID="0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.259837 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa"} err="failed to get container status \"0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa\": rpc error: code = NotFound desc = could not find container \"0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa\": container with ID starting with 0680ec023c6468307885df08d2ac4f1cac0cb88e5ec79fee584fd1c1afbf5efa not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.259878 4990 scope.go:117] "RemoveContainer" containerID="f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.260359 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47\": container with ID starting with f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47 not found: ID does not exist" containerID="f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.260398 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47"} 
err="failed to get container status \"f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47\": rpc error: code = NotFound desc = could not find container \"f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47\": container with ID starting with f2ff93fb8c7c454e35b2bf406258dfcfba4639cb5f8814a7a599d53112389d47 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.260423 4990 scope.go:117] "RemoveContainer" containerID="26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.261344 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337\": container with ID starting with 26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337 not found: ID does not exist" containerID="26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.261431 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337"} err="failed to get container status \"26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337\": rpc error: code = NotFound desc = could not find container \"26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337\": container with ID starting with 26362245e507d8670eba99034d134400bd3ec982033714c91eb70c99f5853337 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.261470 4990 scope.go:117] "RemoveContainer" containerID="4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.262043 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd\": container with ID starting with 4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd not found: ID does not exist" containerID="4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.262081 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd"} err="failed to get container status \"4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd\": rpc error: code = NotFound desc = could not find container \"4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd\": container with ID starting with 4fd758b87cec247eba81ade50034153cdcbbc7f7c87bfcd602d23cb9ea1e04cd not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.262105 4990 scope.go:117] "RemoveContainer" containerID="78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.262605 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8\": container with ID starting with 78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8 not found: ID does not exist" containerID="78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.262653 4990 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8"} err="failed to get container status \"78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8\": rpc error: code = NotFound desc = could not find container \"78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8\": container with ID starting with 78b96b7b027c73093d95b4e9f8ab42d62565ca6239843604ae8aaba9db2a71e8 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.262685 4990 scope.go:117] "RemoveContainer" containerID="c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.263426 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974\": container with ID starting with c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974 not found: ID does not exist" containerID="c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.263469 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974"} err="failed to get container status \"c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974\": rpc error: code = NotFound desc = could not find container \"c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974\": container with ID starting with c71d8495c570c44e8d0d8fce230714f4e17b680d377288202cbad2a9a0e42974 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.263763 4990 scope.go:117] "RemoveContainer" containerID="9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.265085 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e\": container with ID starting with 9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e not found: ID does not exist" containerID="9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.265163 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e"} err="failed to get container status \"9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e\": rpc error: code = NotFound desc = could not find container \"9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e\": container with ID starting with 9fa775c9f947165ed3ddfe5e6cdc672dfd05e2ff806586425cc43ed13294c90e not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.265208 4990 scope.go:117] "RemoveContainer" containerID="d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.265903 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223\": container with ID starting with d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223 not found: ID does 
not exist" containerID="d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.265942 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223"} err="failed to get container status \"d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223\": rpc error: code = NotFound desc = could not find container \"d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223\": container with ID starting with d31ccce1b91f8a9a540e5a730943d1aae099c0fe3faf11f048b29a08c7624223 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.265976 4990 scope.go:117] "RemoveContainer" containerID="9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.266537 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1\": container with ID starting with 9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1 not found: ID does not exist" containerID="9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.266694 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1"} err="failed to get container status \"9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1\": rpc error: code = NotFound desc = could not find container \"9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1\": container with ID starting with 9c1833b8d187a25564fdfc86973fed86449ef89dbe4fd79facc4c862e3f434b1 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.266775 4990 scope.go:117] "RemoveContainer" containerID="ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.267584 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268\": container with ID starting with ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268 not found: ID does not exist" containerID="ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.267638 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268"} err="failed to get container status \"ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268\": rpc error: code = NotFound desc = could not find container \"ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268\": container with ID starting with ca0beff3bc28af2d1932bdaaacb1378a2e2c8bdb17dbe086bdacfa5a6b716268 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.267672 4990 scope.go:117] "RemoveContainer" containerID="d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.268068 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1\": container with ID starting with d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1 not found: ID does not exist" containerID="d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.268118 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1"} err="failed to get container status \"d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1\": rpc error: code = NotFound desc = could not find container \"d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1\": container with ID starting with d560bf232fe1bd37741c0fe18914a2b867d18453e94b3688a301eb8bd39760b1 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.268147 4990 scope.go:117] "RemoveContainer" containerID="8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976" Dec 05 01:36:07 crc kubenswrapper[4990]: E1205 01:36:07.269451 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976\": container with ID starting with 8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976 not found: ID does not exist" containerID="8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.269534 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976"} err="failed to get container status \"8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976\": rpc error: code = NotFound desc = could not find container \"8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976\": container with ID starting with 8eae0c84a22f2ce5fdda06bbd55b6ab37c6416cffdda260c5a8481964de6b976 not found: ID does not exist" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.762915 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerStarted","Data":"774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469"} Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.947263 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" path="/var/lib/kubelet/pods/c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3/volumes" Dec 05 01:36:07 crc kubenswrapper[4990]: I1205 01:36:07.952033 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" path="/var/lib/kubelet/pods/d833c1a0-9e88-4ad3-8bcc-5904d459903a/volumes" Dec 05 01:36:10 crc kubenswrapper[4990]: I1205 01:36:10.808555 4990 generic.go:334] "Generic (PLEG): container finished" podID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerID="cc5093cdab379f16035c70128d1dfeb6ff51bf6e5e36c4ec7ac5f5e887cecd99" exitCode=137 Dec 05 01:36:10 crc kubenswrapper[4990]: I1205 01:36:10.808944 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7599ccc789-q6ldt" event={"ID":"e94da38c-b2d3-4ddb-b032-a6e5bfa62145","Type":"ContainerDied","Data":"cc5093cdab379f16035c70128d1dfeb6ff51bf6e5e36c4ec7ac5f5e887cecd99"} Dec 05 01:36:10 crc kubenswrapper[4990]: I1205 
01:36:10.813337 4990 generic.go:334] "Generic (PLEG): container finished" podID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerID="8bc4083818a23adcc4860fdba5de05f1178e309898e0aed33be3be52454aeccc" exitCode=137 Dec 05 01:36:10 crc kubenswrapper[4990]: I1205 01:36:10.813376 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" event={"ID":"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97","Type":"ContainerDied","Data":"8bc4083818a23adcc4860fdba5de05f1178e309898e0aed33be3be52454aeccc"} Dec 05 01:36:10 crc kubenswrapper[4990]: I1205 01:36:10.918018 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" Dec 05 01:36:10 crc kubenswrapper[4990]: I1205 01:36:10.959598 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rd4gl" podStartSLOduration=5.447004164 podStartE2EDuration="7.959578075s" podCreationTimestamp="2025-12-05 01:36:03 +0000 UTC" firstStartedPulling="2025-12-05 01:36:04.688974359 +0000 UTC m=+1663.065189730" lastFinishedPulling="2025-12-05 01:36:07.20154825 +0000 UTC m=+1665.577763641" observedRunningTime="2025-12-05 01:36:07.78992525 +0000 UTC m=+1666.166140651" watchObservedRunningTime="2025-12-05 01:36:10.959578075 +0000 UTC m=+1669.335793436" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.064348 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data\") pod \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.064585 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq5bp\" (UniqueName: \"kubernetes.io/projected/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-kube-api-access-lq5bp\") pod \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.064663 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-combined-ca-bundle\") pod \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.064682 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data-custom\") pod \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.064700 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-logs\") pod \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\" (UID: \"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.065158 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-logs" (OuterVolumeSpecName: "logs") pod "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" (UID: "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.069028 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" (UID: "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.069556 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-kube-api-access-lq5bp" (OuterVolumeSpecName: "kube-api-access-lq5bp") pod "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" (UID: "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97"). InnerVolumeSpecName "kube-api-access-lq5bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.082771 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7599ccc789-q6ldt" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.088398 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" (UID: "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.102346 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data" (OuterVolumeSpecName: "config-data") pod "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" (UID: "0cfb17a8-ecc2-4fa8-85e0-439d19b01b97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.165746 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.165773 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.165783 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.165791 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.165818 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq5bp\" (UniqueName: \"kubernetes.io/projected/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97-kube-api-access-lq5bp\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.267219 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-combined-ca-bundle\") pod \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.267770 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data\") pod \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.267851 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data-custom\") pod \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.267928 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb64w\" (UniqueName: \"kubernetes.io/projected/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-kube-api-access-mb64w\") pod \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.267970 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-logs\") pod \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\" (UID: \"e94da38c-b2d3-4ddb-b032-a6e5bfa62145\") " Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.268621 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-logs" (OuterVolumeSpecName: "logs") pod "e94da38c-b2d3-4ddb-b032-a6e5bfa62145" (UID: "e94da38c-b2d3-4ddb-b032-a6e5bfa62145"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.271934 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-kube-api-access-mb64w" (OuterVolumeSpecName: "kube-api-access-mb64w") pod "e94da38c-b2d3-4ddb-b032-a6e5bfa62145" (UID: "e94da38c-b2d3-4ddb-b032-a6e5bfa62145"). InnerVolumeSpecName "kube-api-access-mb64w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.272724 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e94da38c-b2d3-4ddb-b032-a6e5bfa62145" (UID: "e94da38c-b2d3-4ddb-b032-a6e5bfa62145"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.292618 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e94da38c-b2d3-4ddb-b032-a6e5bfa62145" (UID: "e94da38c-b2d3-4ddb-b032-a6e5bfa62145"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.307404 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data" (OuterVolumeSpecName: "config-data") pod "e94da38c-b2d3-4ddb-b032-a6e5bfa62145" (UID: "e94da38c-b2d3-4ddb-b032-a6e5bfa62145"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.369190 4990 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-logs\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.369217 4990 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.369227 4990 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.369235 4990 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.369245 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb64w\" (UniqueName: \"kubernetes.io/projected/e94da38c-b2d3-4ddb-b032-a6e5bfa62145-kube-api-access-mb64w\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.450826 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.172:9292/healthcheck\": context deadline exceeded" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.450855 4990 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5ac5a3a4-d5a7-48bb-b7f4-a5723ebddb9d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.622796 4990 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2c281c58-a95e-4669-bdfc-465759817928"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2c281c58-a95e-4669-bdfc-465759817928] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2c281c58_a95e_4669_bdfc_465759817928.slice" Dec 05 01:36:11 crc kubenswrapper[4990]: E1205 01:36:11.622852 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod2c281c58-a95e-4669-bdfc-465759817928] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod2c281c58-a95e-4669-bdfc-465759817928] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2c281c58_a95e_4669_bdfc_465759817928.slice" pod="openstack/openstack-cell1-galera-0" podUID="2c281c58-a95e-4669-bdfc-465759817928" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.829372 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7599ccc789-q6ldt" event={"ID":"e94da38c-b2d3-4ddb-b032-a6e5bfa62145","Type":"ContainerDied","Data":"1bfa86a2a543e49db9ce70b25b2c9ee336180c27c553d5a9fe316d9a59f914a0"} Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.829394 4990 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-worker-7599ccc789-q6ldt" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.829847 4990 scope.go:117] "RemoveContainer" containerID="cc5093cdab379f16035c70128d1dfeb6ff51bf6e5e36c4ec7ac5f5e887cecd99" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.834885 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" event={"ID":"0cfb17a8-ecc2-4fa8-85e0-439d19b01b97","Type":"ContainerDied","Data":"86aec915fd0bfaf2805f86bc873ff8e7f3dc44a12704927bf1f47c0f5a96003d"} Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.834909 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.834986 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57775f7b86-mwzx9" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.873611 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.881056 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.885847 4990 scope.go:117] "RemoveContainer" containerID="4ebf1c229577d1f3c048b69998d4a50fa699eca8648698967f1adc59a1ef23d9" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.893583 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7599ccc789-q6ldt"] Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.900727 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7599ccc789-q6ldt"] Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.913296 4990 scope.go:117] "RemoveContainer" containerID="8bc4083818a23adcc4860fdba5de05f1178e309898e0aed33be3be52454aeccc" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.923293 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57775f7b86-mwzx9"] Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.949065 4990 scope.go:117] "RemoveContainer" containerID="3fea20119ef314b71415f0e498137e3f7e618ac787a6fbcf0557233df608d6ed" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.950122 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c281c58-a95e-4669-bdfc-465759817928" path="/var/lib/kubelet/pods/2c281c58-a95e-4669-bdfc-465759817928/volumes" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.951070 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" path="/var/lib/kubelet/pods/e94da38c-b2d3-4ddb-b032-a6e5bfa62145/volumes" Dec 05 01:36:11 crc kubenswrapper[4990]: I1205 01:36:11.951729 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-57775f7b86-mwzx9"] Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.183335 4990 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod510e9e75-fc35-4bed-8e71-c6e27069f50a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod510e9e75-fc35-4bed-8e71-c6e27069f50a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod510e9e75_fc35_4bed_8e71_c6e27069f50a.slice" Dec 05 01:36:13 crc kubenswrapper[4990]: E1205 01:36:13.183429 4990 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod510e9e75-fc35-4bed-8e71-c6e27069f50a] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod510e9e75-fc35-4bed-8e71-c6e27069f50a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod510e9e75_fc35_4bed_8e71_c6e27069f50a.slice" pod="openstack/kube-state-metrics-0" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.858824 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.882680 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.882752 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.887434 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.898673 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.940639 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" path="/var/lib/kubelet/pods/0cfb17a8-ecc2-4fa8-85e0-439d19b01b97/volumes" Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.941357 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510e9e75-fc35-4bed-8e71-c6e27069f50a" path="/var/lib/kubelet/pods/510e9e75-fc35-4bed-8e71-c6e27069f50a/volumes" Dec 05 01:36:13 crc kubenswrapper[4990]: I1205 01:36:13.952895 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:14 crc kubenswrapper[4990]: I1205 01:36:14.512427 4990 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","poded473a7a-f068-49a3-ae4c-b57b39e33b28"] err="unable to destroy cgroup paths for cgroup [kubepods burstable poded473a7a-f068-49a3-ae4c-b57b39e33b28] : Timed out while waiting for systemd to remove kubepods-burstable-poded473a7a_f068_49a3_ae4c_b57b39e33b28.slice" Dec 05 01:36:14 crc kubenswrapper[4990]: I1205 01:36:14.517720 4990 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0dc80822-8cd5-4004-abdd-160ad6dcdd72"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0dc80822-8cd5-4004-abdd-160ad6dcdd72] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0dc80822_8cd5_4004_abdd_160ad6dcdd72.slice" Dec 05 01:36:14 crc kubenswrapper[4990]: I1205 01:36:14.947366 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:15 crc kubenswrapper[4990]: I1205 01:36:15.019515 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rd4gl"] Dec 05 01:36:16 crc kubenswrapper[4990]: I1205 01:36:16.892650 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rd4gl" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="registry-server" 
containerID="cri-o://774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469" gracePeriod=2 Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.884075 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.902145 4990 generic.go:334] "Generic (PLEG): container finished" podID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerID="774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469" exitCode=0 Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.902181 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerDied","Data":"774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469"} Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.902211 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rd4gl" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.902241 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rd4gl" event={"ID":"fbc42c3f-ae48-43a4-8f55-23efb52a86de","Type":"ContainerDied","Data":"4e770847aebe0f232ad9aa2217f2a672b5a1877bc624dd1a7b073b6cad8bd7f2"} Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.902263 4990 scope.go:117] "RemoveContainer" containerID="774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.921568 4990 scope.go:117] "RemoveContainer" containerID="4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.941322 4990 scope.go:117] "RemoveContainer" containerID="d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.961814 4990 scope.go:117] "RemoveContainer" containerID="774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469" Dec 05 01:36:17 crc kubenswrapper[4990]: E1205 01:36:17.962382 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469\": container with ID starting with 774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469 not found: ID does not exist" containerID="774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.962424 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469"} err="failed to get container status \"774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469\": rpc error: code = NotFound desc = could not find container \"774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469\": container with ID starting with 774b83ca936a1fd6812b50813ed422f8eef60bdcc8cdf5b8e526bed0c856a469 not found: ID does not exist" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.962452 4990 scope.go:117] "RemoveContainer" containerID="4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de" Dec 05 01:36:17 crc kubenswrapper[4990]: E1205 01:36:17.962762 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de\": container with ID starting with 4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de not found: ID does not exist" containerID="4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.962804 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de"} err="failed to get container status \"4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de\": rpc error: code = NotFound desc = could not find container \"4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de\": container with ID starting with 4577ae79bbb40a01f9e4dcd0146adad458b36bd44f37bc3931db20bb005327de not found: ID does not exist" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.962834 4990 scope.go:117] "RemoveContainer" containerID="d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be" Dec 05 01:36:17 crc kubenswrapper[4990]: E1205 01:36:17.963213 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be\": container with ID starting with d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be not found: ID does not exist" containerID="d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.963240 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be"} err="failed to get container status \"d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be\": rpc error: code = NotFound desc = could not find container \"d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be\": container with ID starting with d6ac16ff13b21c977e5a4f476dd547480587e5a757bfe4e017f04cfd7dcc87be not found: ID does not exist" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.977889 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-utilities\") pod \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.977920 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjng7\" (UniqueName: \"kubernetes.io/projected/fbc42c3f-ae48-43a4-8f55-23efb52a86de-kube-api-access-pjng7\") pod \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.977937 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-catalog-content\") pod \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\" (UID: \"fbc42c3f-ae48-43a4-8f55-23efb52a86de\") " Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.979101 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-utilities" (OuterVolumeSpecName: "utilities") pod "fbc42c3f-ae48-43a4-8f55-23efb52a86de" (UID: "fbc42c3f-ae48-43a4-8f55-23efb52a86de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:36:17 crc kubenswrapper[4990]: I1205 01:36:17.983504 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc42c3f-ae48-43a4-8f55-23efb52a86de-kube-api-access-pjng7" (OuterVolumeSpecName: "kube-api-access-pjng7") pod "fbc42c3f-ae48-43a4-8f55-23efb52a86de" (UID: "fbc42c3f-ae48-43a4-8f55-23efb52a86de"). InnerVolumeSpecName "kube-api-access-pjng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:36:18 crc kubenswrapper[4990]: I1205 01:36:18.025944 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbc42c3f-ae48-43a4-8f55-23efb52a86de" (UID: "fbc42c3f-ae48-43a4-8f55-23efb52a86de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:36:18 crc kubenswrapper[4990]: I1205 01:36:18.079883 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:18 crc kubenswrapper[4990]: I1205 01:36:18.080559 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjng7\" (UniqueName: \"kubernetes.io/projected/fbc42c3f-ae48-43a4-8f55-23efb52a86de-kube-api-access-pjng7\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:18 crc kubenswrapper[4990]: I1205 01:36:18.080600 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc42c3f-ae48-43a4-8f55-23efb52a86de-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:36:18 crc kubenswrapper[4990]: I1205 01:36:18.255540 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rd4gl"] Dec 05 01:36:18 crc kubenswrapper[4990]: I1205 01:36:18.266414 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rd4gl"] Dec 05 01:36:19 crc kubenswrapper[4990]: I1205 01:36:19.956161 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" path="/var/lib/kubelet/pods/fbc42c3f-ae48-43a4-8f55-23efb52a86de/volumes" Dec 05 01:36:21 crc kubenswrapper[4990]: I1205 01:36:21.823610 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:36:21 crc kubenswrapper[4990]: I1205 01:36:21.824077 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:36:21 crc kubenswrapper[4990]: I1205 01:36:21.824163 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:36:21 crc kubenswrapper[4990]: I1205 01:36:21.825126 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:36:21 crc kubenswrapper[4990]: I1205 01:36:21.825248 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" gracePeriod=600 Dec 05 01:36:21 crc kubenswrapper[4990]: E1205 01:36:21.991403 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:36:22 crc kubenswrapper[4990]: I1205 01:36:22.018205 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" exitCode=0 Dec 05 01:36:22 crc kubenswrapper[4990]: I1205 01:36:22.018261 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f"} Dec 05 01:36:22 crc kubenswrapper[4990]: I1205 01:36:22.018306 4990 scope.go:117] "RemoveContainer" containerID="5555ce4abbfedb686ddef6d7dce409f40c947a09fec383b5821b1209ff394208" Dec 05 01:36:22 crc kubenswrapper[4990]: I1205 01:36:22.018735 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:36:22 crc kubenswrapper[4990]: E1205 01:36:22.018952 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:36:35 crc kubenswrapper[4990]: I1205 01:36:35.930731 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:36:35 crc kubenswrapper[4990]: E1205 01:36:35.931737 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:36:46 crc kubenswrapper[4990]: I1205 01:36:46.931109 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:36:46 crc kubenswrapper[4990]: E1205 01:36:46.932298 4990 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:37:00 crc kubenswrapper[4990]: I1205 01:37:00.930814 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:37:00 crc kubenswrapper[4990]: E1205 01:37:00.931447 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:37:11 crc kubenswrapper[4990]: I1205 01:37:11.936768 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:37:11 crc kubenswrapper[4990]: E1205 01:37:11.937617 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:37:24 crc kubenswrapper[4990]: I1205 01:37:24.930879 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:37:24 crc kubenswrapper[4990]: E1205 01:37:24.932113 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.453310 4990 scope.go:117] "RemoveContainer" containerID="4ac1fae93e9ab2518d2e7ad50e12a39ab7f54659cf30e784b78090d0efc59346" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.487046 4990 scope.go:117] "RemoveContainer" containerID="bf23153d47795f08a1beb8bebe5fd81358e0773080569922f18d9cf836a35d62" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.531608 4990 scope.go:117] "RemoveContainer" containerID="6a124f2ceb58f1b28fd7e33d50fc28756c66696a4774e8efa70e6a53e7a97329" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.558994 4990 scope.go:117] "RemoveContainer" containerID="50f546f38d10fde078eb863526aabded8f57ec98bea69e0072993fcbb5df0aa5" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.583977 4990 scope.go:117] "RemoveContainer" containerID="d9bd1be2f40dad0b0e75e9a6eb2169029c19d587cfd8318968d95a88ed6d8e32" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.606052 4990 scope.go:117] "RemoveContainer" containerID="cc14d93c184af156cb1266a8caa50ed4d94fad3578f8bdedeface96926975500" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.630162 4990 
scope.go:117] "RemoveContainer" containerID="6290c870625c2bc24d2bbf7c61e3acaf0e8d3f3f7f3e22832b84d2f5bf16b234" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.657694 4990 scope.go:117] "RemoveContainer" containerID="b422db709e2c315f6b31eb654667cb25b1c749864ca47aab010243422f9b6ce3" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.688191 4990 scope.go:117] "RemoveContainer" containerID="eea0a9d2df646aa4abcbe7cfcbbe171d69217dd0f047493b8794dd18a5edc8c6" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.715924 4990 scope.go:117] "RemoveContainer" containerID="1337738b96b97c494ef162bac005232edc2ea2057d25e1bca729e2912f0fc44b" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.745912 4990 scope.go:117] "RemoveContainer" containerID="4ab656dfc2d01202160cd720237d9949e5273fcbe6945b48b1b1bc3f7a17c02a" Dec 05 01:37:30 crc kubenswrapper[4990]: I1205 01:37:30.782544 4990 scope.go:117] "RemoveContainer" containerID="4dc2b7b58dbdfb63d630c0e7a057d062e10619c16da2cfb39d4635ab0061b0d1" Dec 05 01:37:36 crc kubenswrapper[4990]: I1205 01:37:36.930770 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:37:36 crc kubenswrapper[4990]: E1205 01:37:36.931906 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:37:50 crc kubenswrapper[4990]: I1205 01:37:50.931280 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:37:50 crc kubenswrapper[4990]: E1205 01:37:50.932462 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:38:01 crc kubenswrapper[4990]: I1205 01:38:01.936559 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:38:01 crc kubenswrapper[4990]: E1205 01:38:01.937291 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:38:12 crc kubenswrapper[4990]: I1205 01:38:12.930301 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:38:12 crc kubenswrapper[4990]: E1205 01:38:12.931717 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:38:25 crc kubenswrapper[4990]: I1205 01:38:25.930995 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:38:25 crc kubenswrapper[4990]: E1205 01:38:25.932047 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.000207 4990 scope.go:117] "RemoveContainer" containerID="fdd9caffdc54e14f6ca95d0eaa8abd10e382660103a959bd1a6e1e4de288814f" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.035226 4990 scope.go:117] "RemoveContainer" containerID="3c9dc794e63045f795ad588361662d0beeff2af8e7ed2a579125f1e2b198d8d5" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.087556 4990 scope.go:117] "RemoveContainer" containerID="47cb92293894b144fb1235b48e3276d87fdd7829c486d8d12d6658a335ec3215" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.137515 4990 scope.go:117] "RemoveContainer" containerID="44c22461d8201321942c5ec130ec3a61ad61285b0502ddaeb081925c0921d588" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.167071 4990 scope.go:117] "RemoveContainer" containerID="3799513540bbb8d8c3fcf656488ca88981ad8aa1bbb031424d8330e5f55dfe03" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.209176 4990 scope.go:117] "RemoveContainer" containerID="60103cd0845a8ee4ac3a9f4a4b4913b499c5becaac1a1bf97f551b44867160a5" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.242696 4990 scope.go:117] "RemoveContainer" containerID="f51a457180e6f16972fd89d552eb4c0e31055f62c38ccb33930acf53f64f2815" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.268295 4990 scope.go:117] "RemoveContainer" containerID="c24f96e022dad61d2024dca7580d0d23a60db66f66c1ebe8fdb46bbc6050c570" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.288237 4990 scope.go:117] "RemoveContainer" containerID="03f57bdc635d64e1856258014678128b8246e48a437406c4f7954d39ab078ccb" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.311972 4990 scope.go:117] "RemoveContainer" containerID="959c98d753d2f71fdcd4c37382a282b3f4af36529378c1080b16d158907fa142" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.332845 4990 scope.go:117] "RemoveContainer" containerID="1301884c7cfb4fdd1cf59676cfbbdb5767dae769ecedc426439c67dfdc545613" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.350596 4990 scope.go:117] "RemoveContainer" containerID="93c1c5b43e3e314a2c0392ec6bff92ab1b0d03d6e4c4d5249b0654ad3fee74c1" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.383469 4990 scope.go:117] "RemoveContainer" containerID="276d846fcb3add4feecfd49dda876e7767dd3082e126a6095d86ceb7aaba3127" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.424171 4990 scope.go:117] "RemoveContainer" containerID="340420a2025d54544eda878133b62062f1bf33394109ba9351def100e40307e9" Dec 05 01:38:31 crc kubenswrapper[4990]: I1205 01:38:31.455264 4990 scope.go:117] "RemoveContainer" 
containerID="5d677ac4cf7f17763cb57fdbd241dbbc43ee718b5236104aeaebafadb2a7637a" Dec 05 01:38:37 crc kubenswrapper[4990]: I1205 01:38:37.931723 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:38:37 crc kubenswrapper[4990]: E1205 01:38:37.932904 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:38:50 crc kubenswrapper[4990]: I1205 01:38:50.931209 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:38:50 crc kubenswrapper[4990]: E1205 01:38:50.932336 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:39:03 crc kubenswrapper[4990]: I1205 01:39:03.931299 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:39:03 crc kubenswrapper[4990]: E1205 01:39:03.932429 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:39:15 crc kubenswrapper[4990]: I1205 01:39:15.934096 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:39:15 crc kubenswrapper[4990]: E1205 01:39:15.937381 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:39:30 crc kubenswrapper[4990]: I1205 01:39:30.931306 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:39:30 crc kubenswrapper[4990]: E1205 01:39:30.932649 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.733280 4990 scope.go:117] "RemoveContainer" 
containerID="bfc2d9d9367d1796c1c148e97af6bf3f14bd34ece3eba129aeb8e30655978558" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.772317 4990 scope.go:117] "RemoveContainer" containerID="7096252d13b83e4d071d02b3dd7ec8510bf0890112154f967337d7814d1d45f9" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.812003 4990 scope.go:117] "RemoveContainer" containerID="90907eab6e43a67e9dd116d95af526faeb9a66ef61e70f7e11881689a35c73d5" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.843812 4990 scope.go:117] "RemoveContainer" containerID="6f3878a570f1cef13134c39eefe7ab105ee108418b035d9ba31b4cc89571e005" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.877952 4990 scope.go:117] "RemoveContainer" containerID="84eda9e236cf04f0c78610e9e447f88ae6f7b12e26fe33aa20954f632cbbdd2c" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.928227 4990 scope.go:117] "RemoveContainer" containerID="75a01a2a625f0f4818f4355774ffa785f189c55650887e146da7dd75eb006af5" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.958354 4990 scope.go:117] "RemoveContainer" containerID="47ee167b0a8a9940f0690f6d46577d988195ffd84ef5e376de9c87b35275956b" Dec 05 01:39:31 crc kubenswrapper[4990]: I1205 01:39:31.990535 4990 scope.go:117] "RemoveContainer" containerID="5f490bc39eb1a824091b54123c3705eafc0ddd4d3bb92d574be2d6b179034a7a" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.018315 4990 scope.go:117] "RemoveContainer" containerID="53b0bfd7b2462543f87d772d7dfa037728fcbd1160cbcca19896f5064f9a4067" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.041518 4990 scope.go:117] "RemoveContainer" containerID="c4fc68382e82bbd65c9807f60fda89243ec97c97906e4b3b2e244ffc30c13392" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.068217 4990 scope.go:117] "RemoveContainer" containerID="6699c1e65bbec24e9ec9e853d5d34df75d7211700fe2400657624212ab90c757" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.106233 4990 scope.go:117] "RemoveContainer" containerID="2c37fd99ba8ed3daa5da82a0119a54cd33959a588276037e6fc12867a26ee9ed" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.128553 4990 scope.go:117] "RemoveContainer" containerID="82eb58ebe7ffe6cca157c0c411fe2fb1cb6d998e427d96fff54c39ba5fae459b" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.154379 4990 scope.go:117] "RemoveContainer" containerID="d3c29bdfe7061a79af10402934bcafa151f9c05ededba5e2abcfbccb51695e00" Dec 05 01:39:32 crc kubenswrapper[4990]: I1205 01:39:32.183419 4990 scope.go:117] "RemoveContainer" containerID="a6a0b8906f99b50879cb560fb51bbf9c450509d546577e95f54a1e3df9551cd6" Dec 05 01:39:43 crc kubenswrapper[4990]: I1205 01:39:43.931732 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:39:43 crc kubenswrapper[4990]: E1205 01:39:43.932711 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:39:58 crc kubenswrapper[4990]: I1205 01:39:58.930727 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:39:58 crc kubenswrapper[4990]: E1205 01:39:58.931829 4990 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:40:12 crc kubenswrapper[4990]: I1205 01:40:12.930504 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:40:12 crc kubenswrapper[4990]: E1205 01:40:12.932125 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:40:25 crc kubenswrapper[4990]: I1205 01:40:25.930374 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:40:25 crc kubenswrapper[4990]: E1205 01:40:25.931899 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.471206 4990 scope.go:117] "RemoveContainer" containerID="96311983bd4bbe76a84ad5addf79e1a778ba9e353f3375b6a5513195e6d9b82e" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.507718 4990 scope.go:117] "RemoveContainer" containerID="c189c80e4d1e8c0cb40b43bd6f7cfdd9ca6fd3bf68e0d1a4e9bcc9967adf47ac" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.533974 4990 scope.go:117] "RemoveContainer" containerID="355f508a6ba74b579c0a1d7879d2f885a7d91996f956db831c47eaff9fdff134" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.561219 4990 scope.go:117] "RemoveContainer" containerID="33e18624487a0649c17fcc4bee3e82daec971c85efad2a6add23dc9afee24ab3" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.589728 4990 scope.go:117] "RemoveContainer" containerID="20a15ff7d9ef03afb2290c70568d59c0cf2c772ab1f191fa1f665fe5087dd5d7" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.649815 4990 scope.go:117] "RemoveContainer" containerID="99e3f4b0483634358a7d1235ce5eb8570a7f7ed07fa299cea8aa4652c97c14e8" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.690070 4990 scope.go:117] "RemoveContainer" containerID="c95ffe76a5951330701a50f1c0be95fb26965f5c6b814919400348bb0cf716fe" Dec 05 01:40:32 crc kubenswrapper[4990]: I1205 01:40:32.718974 4990 scope.go:117] "RemoveContainer" containerID="69a5aabd9213d2ac0aa5f6ec4c7060d9bffab325b22fa8faccce3de34ef2b5a3" Dec 05 01:40:37 crc kubenswrapper[4990]: I1205 01:40:37.930410 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:40:37 crc kubenswrapper[4990]: E1205 01:40:37.930884 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:40:52 crc kubenswrapper[4990]: I1205 01:40:52.931828 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:40:52 crc kubenswrapper[4990]: E1205 01:40:52.932660 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:41:05 crc kubenswrapper[4990]: I1205 01:41:05.930303 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:41:05 crc kubenswrapper[4990]: E1205 01:41:05.931297 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:41:16 crc kubenswrapper[4990]: I1205 01:41:16.930564 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:41:16 crc kubenswrapper[4990]: E1205 01:41:16.931567 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:41:27 crc kubenswrapper[4990]: I1205 01:41:27.933222 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:41:28 crc kubenswrapper[4990]: I1205 01:41:28.530315 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"332c1c66c0c63ad31a8eb8eb91157daf8f3f3a515aaf78c4c65bb7a7320c8f26"} Dec 05 01:41:32 crc kubenswrapper[4990]: I1205 01:41:32.891517 4990 scope.go:117] "RemoveContainer" containerID="95871264dddedee0223bf43470e710503cc4d9eb3d22d3ebef3d08484f77a4e6" Dec 05 01:41:32 crc kubenswrapper[4990]: I1205 01:41:32.927996 4990 scope.go:117] "RemoveContainer" containerID="1da14eb63196d0ca1b50f0e559638781676ddc745754bd656a71b4fcad75d292" Dec 05 01:41:32 crc kubenswrapper[4990]: I1205 01:41:32.965220 4990 scope.go:117] "RemoveContainer" containerID="8eb62300cc3ccbd37e11d39589f93dfecbfed82d1d1d22eb835f940823d41073" Dec 05 01:41:32 crc kubenswrapper[4990]: I1205 01:41:32.983941 4990 scope.go:117] "RemoveContainer" containerID="a6c8921d8a3c62aed69725cab2690e2cd5481b1e2d2f6d5631d6f5f5be266d43" Dec 05 01:42:33 crc 
kubenswrapper[4990]: I1205 01:42:33.079957 4990 scope.go:117] "RemoveContainer" containerID="3511ef3c763384814d89ad4ea38df77c466fcbb49bbb254ff4f1acd94b9088c2" Dec 05 01:42:33 crc kubenswrapper[4990]: I1205 01:42:33.128351 4990 scope.go:117] "RemoveContainer" containerID="f32cb51b220cae56d356be4dd5d1fa30a40674bac0da14eb72c6e3c748fd23b4" Dec 05 01:42:33 crc kubenswrapper[4990]: I1205 01:42:33.167965 4990 scope.go:117] "RemoveContainer" containerID="bfd26f3b6ec2e81ecb4f3ded5d8569c766eac173e5de019ab3d80eb2d6da588e" Dec 05 01:42:33 crc kubenswrapper[4990]: I1205 01:42:33.203467 4990 scope.go:117] "RemoveContainer" containerID="b6bf2bb125def428a4d480d2bbbd6133314c5ca3fe6fc81390c342688ce6ce20" Dec 05 01:42:33 crc kubenswrapper[4990]: I1205 01:42:33.244320 4990 scope.go:117] "RemoveContainer" containerID="d02c97f717b88563cf577b01b6bef8cd4858482f699b83682b846f62dbb607ec" Dec 05 01:42:33 crc kubenswrapper[4990]: I1205 01:42:33.284452 4990 scope.go:117] "RemoveContainer" containerID="6d4760603a654f0855f0b6eefff463312ee7c990e1aafe561c11bc0ce58c2194" Dec 05 01:42:33 crc kubenswrapper[4990]: I1205 01:42:33.310871 4990 scope.go:117] "RemoveContainer" containerID="1648c0a8495c50ad4a23e1a6ed5f5b8f6c1c09dccf4ff6d9981248049ef54dbf" Dec 05 01:43:51 crc kubenswrapper[4990]: I1205 01:43:51.823916 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:43:51 crc kubenswrapper[4990]: I1205 01:43:51.824814 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:44:21 crc kubenswrapper[4990]: I1205 01:44:21.823568 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:44:21 crc kubenswrapper[4990]: I1205 01:44:21.824174 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:44:51 crc kubenswrapper[4990]: I1205 01:44:51.823790 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:44:51 crc kubenswrapper[4990]: I1205 01:44:51.824646 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:44:51 crc kubenswrapper[4990]: I1205 01:44:51.824755 4990 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:44:51 crc kubenswrapper[4990]: I1205 01:44:51.825850 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"332c1c66c0c63ad31a8eb8eb91157daf8f3f3a515aaf78c4c65bb7a7320c8f26"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:44:51 crc kubenswrapper[4990]: I1205 01:44:51.826135 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://332c1c66c0c63ad31a8eb8eb91157daf8f3f3a515aaf78c4c65bb7a7320c8f26" gracePeriod=600 Dec 05 01:44:52 crc kubenswrapper[4990]: I1205 01:44:52.502937 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="332c1c66c0c63ad31a8eb8eb91157daf8f3f3a515aaf78c4c65bb7a7320c8f26" exitCode=0 Dec 05 01:44:52 crc kubenswrapper[4990]: I1205 01:44:52.503016 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"332c1c66c0c63ad31a8eb8eb91157daf8f3f3a515aaf78c4c65bb7a7320c8f26"} Dec 05 01:44:52 crc kubenswrapper[4990]: I1205 01:44:52.503330 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398"} Dec 05 01:44:52 crc kubenswrapper[4990]: I1205 01:44:52.503374 4990 scope.go:117] "RemoveContainer" containerID="2d4ea228591d9ae7abab2d6300a8c28ea985494d462aee9ebadaead9d728a86f" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.166207 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28"] Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.169425 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.169813 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.170059 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener-log" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.170285 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener-log" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.170432 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="registry-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.170622 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="registry-server" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.170843 4990 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-reaper" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.171053 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-reaper" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.171222 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-updater" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.171358 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-updater" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.171644 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.171834 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.171996 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.172131 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.172294 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="extract-utilities" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.172450 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="extract-utilities" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.172684 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="extract-content" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.172883 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="extract-content" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.173054 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="rsync" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.173227 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="rsync" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.173392 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker-log" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.173608 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker-log" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.173795 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.173982 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-server" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 
01:45:00.174200 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.174415 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.174674 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.174862 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.175094 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.175297 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-server" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.175544 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.175720 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.175908 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.176075 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.176407 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-updater" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.176566 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-updater" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.176707 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.176838 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.176966 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.177183 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.177357 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="swift-recon-cron" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.177555 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="swift-recon-cron" 
Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.177803 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.178004 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.178177 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.178316 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-server" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.178470 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-expirer" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.178643 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-expirer" Dec 05 01:45:00 crc kubenswrapper[4990]: E1205 01:45:00.178776 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server-init" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.178898 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server-init" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.179382 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.179665 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovsdb-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.179826 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.179995 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfb17a8-ecc2-4fa8-85e0-439d19b01b97" containerName="barbican-keystone-listener-log" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.180132 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="swift-recon-cron" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.180270 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc42c3f-ae48-43a4-8f55-23efb52a86de" containerName="registry-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.180408 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-updater" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.180577 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.180727 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.180871 4990 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-reaper" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.181012 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.181179 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.181354 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="d833c1a0-9e88-4ad3-8bcc-5904d459903a" containerName="ovs-vswitchd" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.181537 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker-log" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.181723 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-updater" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.181897 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94da38c-b2d3-4ddb-b032-a6e5bfa62145" containerName="barbican-worker" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.182007 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.182094 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="rsync" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.182180 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-expirer" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.182278 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="account-auditor" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.182433 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="object-replicator" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.182567 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f05c11-ed0c-4e8d-bd22-05c8787cbcc3" containerName="container-server" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.183250 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.188184 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.188213 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.189002 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28"] Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.276288 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fa1d57f-827d-43cb-91d6-5af20245b900-secret-volume\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.276359 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnc6\" (UniqueName: \"kubernetes.io/projected/4fa1d57f-827d-43cb-91d6-5af20245b900-kube-api-access-lxnc6\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.276450 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa1d57f-827d-43cb-91d6-5af20245b900-config-volume\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.377598 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa1d57f-827d-43cb-91d6-5af20245b900-config-volume\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.378051 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fa1d57f-827d-43cb-91d6-5af20245b900-secret-volume\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.378085 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnc6\" (UniqueName: \"kubernetes.io/projected/4fa1d57f-827d-43cb-91d6-5af20245b900-kube-api-access-lxnc6\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.379105 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa1d57f-827d-43cb-91d6-5af20245b900-config-volume\") pod 
\"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.385632 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fa1d57f-827d-43cb-91d6-5af20245b900-secret-volume\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.408612 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnc6\" (UniqueName: \"kubernetes.io/projected/4fa1d57f-827d-43cb-91d6-5af20245b900-kube-api-access-lxnc6\") pod \"collect-profiles-29414985-8sv28\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:00 crc kubenswrapper[4990]: I1205 01:45:00.516939 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" Dec 05 01:45:01 crc kubenswrapper[4990]: I1205 01:45:01.022854 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28"] Dec 05 01:45:01 crc kubenswrapper[4990]: I1205 01:45:01.595226 4990 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d57f-827d-43cb-91d6-5af20245b900" containerID="e7deb4b0c6616cc326f6b11a427da5e2b665786bcd372e179b3ff79f441cfcca" exitCode=0 Dec 05 01:45:01 crc kubenswrapper[4990]: I1205 01:45:01.595419 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" event={"ID":"4fa1d57f-827d-43cb-91d6-5af20245b900","Type":"ContainerDied","Data":"e7deb4b0c6616cc326f6b11a427da5e2b665786bcd372e179b3ff79f441cfcca"} Dec 05 01:45:01 crc kubenswrapper[4990]: I1205 01:45:01.595674 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" event={"ID":"4fa1d57f-827d-43cb-91d6-5af20245b900","Type":"ContainerStarted","Data":"3fa701624bb33f118a04299fd28b0868add51844a61a972200c6de5b9bbeacda"} Dec 05 01:45:02 crc kubenswrapper[4990]: I1205 01:45:02.908395 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kkhq4"] Dec 05 01:45:02 crc kubenswrapper[4990]: I1205 01:45:02.910422 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkhq4" Dec 05 01:45:02 crc kubenswrapper[4990]: I1205 01:45:02.926266 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkhq4"] Dec 05 01:45:02 crc kubenswrapper[4990]: I1205 01:45:02.992935 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.021723 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fq6b\" (UniqueName: \"kubernetes.io/projected/3abfe8a3-7dcb-401a-8a22-e318d04660c7-kube-api-access-6fq6b\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.021794 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-catalog-content\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.021870 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-utilities\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123079 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxnc6\" (UniqueName: \"kubernetes.io/projected/4fa1d57f-827d-43cb-91d6-5af20245b900-kube-api-access-lxnc6\") pod \"4fa1d57f-827d-43cb-91d6-5af20245b900\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") "
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123219 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa1d57f-827d-43cb-91d6-5af20245b900-config-volume\") pod \"4fa1d57f-827d-43cb-91d6-5af20245b900\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") "
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123338 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fa1d57f-827d-43cb-91d6-5af20245b900-secret-volume\") pod \"4fa1d57f-827d-43cb-91d6-5af20245b900\" (UID: \"4fa1d57f-827d-43cb-91d6-5af20245b900\") "
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123623 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fq6b\" (UniqueName: \"kubernetes.io/projected/3abfe8a3-7dcb-401a-8a22-e318d04660c7-kube-api-access-6fq6b\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123708 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-catalog-content\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123791 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-utilities\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.123957 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa1d57f-827d-43cb-91d6-5af20245b900-config-volume" (OuterVolumeSpecName: "config-volume") pod "4fa1d57f-827d-43cb-91d6-5af20245b900" (UID: "4fa1d57f-827d-43cb-91d6-5af20245b900"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.124559 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-catalog-content\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.124568 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-utilities\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.128880 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa1d57f-827d-43cb-91d6-5af20245b900-kube-api-access-lxnc6" (OuterVolumeSpecName: "kube-api-access-lxnc6") pod "4fa1d57f-827d-43cb-91d6-5af20245b900" (UID: "4fa1d57f-827d-43cb-91d6-5af20245b900"). InnerVolumeSpecName "kube-api-access-lxnc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.134588 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa1d57f-827d-43cb-91d6-5af20245b900-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4fa1d57f-827d-43cb-91d6-5af20245b900" (UID: "4fa1d57f-827d-43cb-91d6-5af20245b900"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.141458 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fq6b\" (UniqueName: \"kubernetes.io/projected/3abfe8a3-7dcb-401a-8a22-e318d04660c7-kube-api-access-6fq6b\") pod \"redhat-operators-kkhq4\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") " pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.224903 4990 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa1d57f-827d-43cb-91d6-5af20245b900-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.224951 4990 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fa1d57f-827d-43cb-91d6-5af20245b900-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.224988 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxnc6\" (UniqueName: \"kubernetes.io/projected/4fa1d57f-827d-43cb-91d6-5af20245b900-kube-api-access-lxnc6\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.226375 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.609715 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28" event={"ID":"4fa1d57f-827d-43cb-91d6-5af20245b900","Type":"ContainerDied","Data":"3fa701624bb33f118a04299fd28b0868add51844a61a972200c6de5b9bbeacda"}
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.610116 4990 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa701624bb33f118a04299fd28b0868add51844a61a972200c6de5b9bbeacda"
Dec 05 01:45:03 crc kubenswrapper[4990]: I1205 01:45:03.609772 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414985-8sv28"
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.071075 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx"]
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.084097 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414940-6mmxx"]
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.224066 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkhq4"]
Dec 05 01:45:04 crc kubenswrapper[4990]: W1205 01:45:04.233696 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3abfe8a3_7dcb_401a_8a22_e318d04660c7.slice/crio-06f07ff6b05a581b7eda3973e9e176549e8df21bb961fbaf849adc4a985578c5 WatchSource:0}: Error finding container 06f07ff6b05a581b7eda3973e9e176549e8df21bb961fbaf849adc4a985578c5: Status 404 returned error can't find the container with id 06f07ff6b05a581b7eda3973e9e176549e8df21bb961fbaf849adc4a985578c5
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.620141 4990 generic.go:334] "Generic (PLEG): container finished" podID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerID="da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6" exitCode=0
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.620191 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerDied","Data":"da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6"}
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.620263 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerStarted","Data":"06f07ff6b05a581b7eda3973e9e176549e8df21bb961fbaf849adc4a985578c5"}
Dec 05 01:45:04 crc kubenswrapper[4990]: I1205 01:45:04.623198 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 01:45:05 crc kubenswrapper[4990]: I1205 01:45:05.632151 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerStarted","Data":"adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64"}
Dec 05 01:45:05 crc kubenswrapper[4990]: I1205 01:45:05.942261 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b21d39-3456-4a12-a91b-459864e74087" path="/var/lib/kubelet/pods/e3b21d39-3456-4a12-a91b-459864e74087/volumes"
Dec 05 01:45:06 crc kubenswrapper[4990]: I1205 01:45:06.644953 4990 generic.go:334] "Generic (PLEG): container finished" podID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerID="adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64" exitCode=0
Dec 05 01:45:06 crc kubenswrapper[4990]: I1205 01:45:06.645002 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerDied","Data":"adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64"}
Dec 05 01:45:07 crc kubenswrapper[4990]: I1205 01:45:07.654716 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerStarted","Data":"d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985"}
Dec 05 01:45:07 crc kubenswrapper[4990]: I1205 01:45:07.678844 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kkhq4" podStartSLOduration=3.252734071 podStartE2EDuration="5.678821082s" podCreationTimestamp="2025-12-05 01:45:02 +0000 UTC" firstStartedPulling="2025-12-05 01:45:04.622758074 +0000 UTC m=+2202.998973475" lastFinishedPulling="2025-12-05 01:45:07.048845085 +0000 UTC m=+2205.425060486" observedRunningTime="2025-12-05 01:45:07.676123436 +0000 UTC m=+2206.052338807" watchObservedRunningTime="2025-12-05 01:45:07.678821082 +0000 UTC m=+2206.055036483"
Dec 05 01:45:13 crc kubenswrapper[4990]: I1205 01:45:13.226661 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:13 crc kubenswrapper[4990]: I1205 01:45:13.228547 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:14 crc kubenswrapper[4990]: I1205 01:45:14.283289 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkhq4" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="registry-server" probeResult="failure" output=<
Dec 05 01:45:14 crc kubenswrapper[4990]: timeout: failed to connect service ":50051" within 1s
Dec 05 01:45:14 crc kubenswrapper[4990]: >
Dec 05 01:45:23 crc kubenswrapper[4990]: I1205 01:45:23.315847 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:23 crc kubenswrapper[4990]: I1205 01:45:23.469983 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:23 crc kubenswrapper[4990]: I1205 01:45:23.575570 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkhq4"]
Dec 05 01:45:24 crc kubenswrapper[4990]: I1205 01:45:24.809451 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kkhq4" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="registry-server" containerID="cri-o://d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985" gracePeriod=2
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.258253 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.359876 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-catalog-content\") pod \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") "
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.360115 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-utilities\") pod \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") "
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.360203 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fq6b\" (UniqueName: \"kubernetes.io/projected/3abfe8a3-7dcb-401a-8a22-e318d04660c7-kube-api-access-6fq6b\") pod \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\" (UID: \"3abfe8a3-7dcb-401a-8a22-e318d04660c7\") "
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.361151 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-utilities" (OuterVolumeSpecName: "utilities") pod "3abfe8a3-7dcb-401a-8a22-e318d04660c7" (UID: "3abfe8a3-7dcb-401a-8a22-e318d04660c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.370609 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3abfe8a3-7dcb-401a-8a22-e318d04660c7-kube-api-access-6fq6b" (OuterVolumeSpecName: "kube-api-access-6fq6b") pod "3abfe8a3-7dcb-401a-8a22-e318d04660c7" (UID: "3abfe8a3-7dcb-401a-8a22-e318d04660c7"). InnerVolumeSpecName "kube-api-access-6fq6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.462873 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.462927 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fq6b\" (UniqueName: \"kubernetes.io/projected/3abfe8a3-7dcb-401a-8a22-e318d04660c7-kube-api-access-6fq6b\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.510785 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3abfe8a3-7dcb-401a-8a22-e318d04660c7" (UID: "3abfe8a3-7dcb-401a-8a22-e318d04660c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.564860 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3abfe8a3-7dcb-401a-8a22-e318d04660c7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.820471 4990 generic.go:334] "Generic (PLEG): container finished" podID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerID="d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985" exitCode=0
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.820573 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerDied","Data":"d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985"}
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.820605 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkhq4"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.820625 4990 scope.go:117] "RemoveContainer" containerID="d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.820610 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkhq4" event={"ID":"3abfe8a3-7dcb-401a-8a22-e318d04660c7","Type":"ContainerDied","Data":"06f07ff6b05a581b7eda3973e9e176549e8df21bb961fbaf849adc4a985578c5"}
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.853049 4990 scope.go:117] "RemoveContainer" containerID="adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.869461 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkhq4"]
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.876222 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kkhq4"]
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.892670 4990 scope.go:117] "RemoveContainer" containerID="da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.916292 4990 scope.go:117] "RemoveContainer" containerID="d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985"
Dec 05 01:45:25 crc kubenswrapper[4990]: E1205 01:45:25.916767 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985\": container with ID starting with d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985 not found: ID does not exist" containerID="d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.916802 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985"} err="failed to get container status \"d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985\": rpc error: code = NotFound desc = could not find container \"d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985\": container with ID starting with d71f7ab74b253ee829763f3ef3fae9ba6078ae8edf26ba1fec4ce844043b5985 not found: ID does not exist"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.916830 4990 scope.go:117] "RemoveContainer" containerID="adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64"
Dec 05 01:45:25 crc kubenswrapper[4990]: E1205 01:45:25.917144 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64\": container with ID starting with adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64 not found: ID does not exist" containerID="adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.917166 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64"} err="failed to get container status \"adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64\": rpc error: code = NotFound desc = could not find container \"adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64\": container with ID starting with adbecb66e6b6fbb0222bd82db94b3dd65d0c298b7fed055676558f33a27d5c64 not found: ID does not exist"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.917184 4990 scope.go:117] "RemoveContainer" containerID="da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6"
Dec 05 01:45:25 crc kubenswrapper[4990]: E1205 01:45:25.918733 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6\": container with ID starting with da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6 not found: ID does not exist" containerID="da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.918787 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6"} err="failed to get container status \"da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6\": rpc error: code = NotFound desc = could not find container \"da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6\": container with ID starting with da7b8ea3ee92e8864e13621a0a0f2ac471cce7d0a7349bd630eb156c4954c1c6 not found: ID does not exist"
Dec 05 01:45:25 crc kubenswrapper[4990]: I1205 01:45:25.939424 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" path="/var/lib/kubelet/pods/3abfe8a3-7dcb-401a-8a22-e318d04660c7/volumes"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.736327 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhkq4"]
Dec 05 01:45:30 crc kubenswrapper[4990]: E1205 01:45:30.737499 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="registry-server"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.737514 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="registry-server"
Dec 05 01:45:30 crc kubenswrapper[4990]: E1205 01:45:30.737553 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="extract-utilities"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.737562 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="extract-utilities"
Dec 05 01:45:30 crc kubenswrapper[4990]: E1205 01:45:30.737586 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="extract-content"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.737596 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="extract-content"
Dec 05 01:45:30 crc kubenswrapper[4990]: E1205 01:45:30.737639 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d57f-827d-43cb-91d6-5af20245b900" containerName="collect-profiles"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.737647 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d57f-827d-43cb-91d6-5af20245b900" containerName="collect-profiles"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.738045 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa1d57f-827d-43cb-91d6-5af20245b900" containerName="collect-profiles"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.738068 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="3abfe8a3-7dcb-401a-8a22-e318d04660c7" containerName="registry-server"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.740889 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.775940 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhkq4"]
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.856540 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-catalog-content\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.856639 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-utilities\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.856676 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wbxd\" (UniqueName: \"kubernetes.io/projected/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-kube-api-access-2wbxd\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.957939 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-utilities\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.957984 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wbxd\" (UniqueName: \"kubernetes.io/projected/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-kube-api-access-2wbxd\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.958052 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-catalog-content\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.958444 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-catalog-content\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.959002 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-utilities\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:30 crc kubenswrapper[4990]: I1205 01:45:30.978778 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wbxd\" (UniqueName: \"kubernetes.io/projected/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-kube-api-access-2wbxd\") pod \"community-operators-dhkq4\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") " pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:31 crc kubenswrapper[4990]: I1205 01:45:31.077248 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:31 crc kubenswrapper[4990]: I1205 01:45:31.542329 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhkq4"]
Dec 05 01:45:31 crc kubenswrapper[4990]: I1205 01:45:31.873961 4990 generic.go:334] "Generic (PLEG): container finished" podID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerID="88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda" exitCode=0
Dec 05 01:45:31 crc kubenswrapper[4990]: I1205 01:45:31.874193 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerDied","Data":"88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda"}
Dec 05 01:45:31 crc kubenswrapper[4990]: I1205 01:45:31.874290 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerStarted","Data":"3e4a9ff9e657252d0663b73cabbedd5ad41e5224aa1cb233b7f4e2afdccd27d0"}
Dec 05 01:45:32 crc kubenswrapper[4990]: I1205 01:45:32.884338 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerStarted","Data":"c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c"}
Dec 05 01:45:33 crc kubenswrapper[4990]: I1205 01:45:33.511071 4990 scope.go:117] "RemoveContainer" containerID="4afec6235374e956b244388a9a2e025fb6f35b3c77708fd1af1a0ddadb2003a4"
Dec 05 01:45:33 crc kubenswrapper[4990]: I1205 01:45:33.898476 4990 generic.go:334] "Generic (PLEG): container finished" podID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerID="c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c" exitCode=0
Dec 05 01:45:33 crc kubenswrapper[4990]: I1205 01:45:33.898589 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerDied","Data":"c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c"}
Dec 05 01:45:34 crc kubenswrapper[4990]: I1205 01:45:34.908348 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerStarted","Data":"53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0"}
Dec 05 01:45:34 crc kubenswrapper[4990]: I1205 01:45:34.928086 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhkq4" podStartSLOduration=2.465537862 podStartE2EDuration="4.928068727s" podCreationTimestamp="2025-12-05 01:45:30 +0000 UTC" firstStartedPulling="2025-12-05 01:45:31.875719973 +0000 UTC m=+2230.251935354" lastFinishedPulling="2025-12-05 01:45:34.338250848 +0000 UTC m=+2232.714466219" observedRunningTime="2025-12-05 01:45:34.924846336 +0000 UTC m=+2233.301061707" watchObservedRunningTime="2025-12-05 01:45:34.928068727 +0000 UTC m=+2233.304284088"
Dec 05 01:45:41 crc kubenswrapper[4990]: I1205 01:45:41.082047 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:41 crc kubenswrapper[4990]: I1205 01:45:41.082720 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:41 crc kubenswrapper[4990]: I1205 01:45:41.131083 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:42 crc kubenswrapper[4990]: I1205 01:45:42.030701 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:42 crc kubenswrapper[4990]: I1205 01:45:42.091861 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhkq4"]
Dec 05 01:45:43 crc kubenswrapper[4990]: I1205 01:45:43.979095 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhkq4" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="registry-server" containerID="cri-o://53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0" gracePeriod=2
Dec 05 01:45:44 crc kubenswrapper[4990]: I1205 01:45:44.936943 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:44 crc kubenswrapper[4990]: I1205 01:45:44.989401 4990 generic.go:334] "Generic (PLEG): container finished" podID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerID="53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0" exitCode=0
Dec 05 01:45:44 crc kubenswrapper[4990]: I1205 01:45:44.989460 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerDied","Data":"53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0"}
Dec 05 01:45:44 crc kubenswrapper[4990]: I1205 01:45:44.989543 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhkq4" event={"ID":"ca710615-e5d1-4669-9d6c-6e6d9bc118f3","Type":"ContainerDied","Data":"3e4a9ff9e657252d0663b73cabbedd5ad41e5224aa1cb233b7f4e2afdccd27d0"}
Dec 05 01:45:44 crc kubenswrapper[4990]: I1205 01:45:44.989575 4990 scope.go:117] "RemoveContainer" containerID="53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0"
Dec 05 01:45:44 crc kubenswrapper[4990]: I1205 01:45:44.989747 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhkq4"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.013200 4990 scope.go:117] "RemoveContainer" containerID="c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.034888 4990 scope.go:117] "RemoveContainer" containerID="88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.063107 4990 scope.go:117] "RemoveContainer" containerID="53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0"
Dec 05 01:45:45 crc kubenswrapper[4990]: E1205 01:45:45.064274 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0\": container with ID starting with 53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0 not found: ID does not exist" containerID="53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.064319 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0"} err="failed to get container status \"53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0\": rpc error: code = NotFound desc = could not find container \"53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0\": container with ID starting with 53dd107563e4dcb3059bda1eccec4cf3eb12894a800e9f684220cea502f09ad0 not found: ID does not exist"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.064351 4990 scope.go:117] "RemoveContainer" containerID="c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c"
Dec 05 01:45:45 crc kubenswrapper[4990]: E1205 01:45:45.065206 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c\": container with ID starting with c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c not found: ID does not exist" containerID="c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.065231 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c"} err="failed to get container status \"c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c\": rpc error: code = NotFound desc = could not find container \"c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c\": container with ID starting with c5a1d2bd558108d95b2a8793e1c5c2d5cc2a166b87f12b509581b19bcbc3a95c not found: ID does not exist"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.065245 4990 scope.go:117] "RemoveContainer" containerID="88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda"
Dec 05 01:45:45 crc kubenswrapper[4990]: E1205 01:45:45.065896 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda\": container with ID starting with 88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda not found: ID does not exist" containerID="88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.065977 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda"} err="failed to get container status \"88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda\": rpc error: code = NotFound desc = could not find container \"88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda\": container with ID starting with 88b98b53e9ad62aaa280c9f84338db888ecbac273e4d012d647e78ca6078ceda not found: ID does not exist"
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.098696 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wbxd\" (UniqueName: \"kubernetes.io/projected/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-kube-api-access-2wbxd\") pod \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") "
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.098755 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-catalog-content\") pod \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") "
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.098905 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-utilities\") pod \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\" (UID: \"ca710615-e5d1-4669-9d6c-6e6d9bc118f3\") "
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.099960 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-utilities" (OuterVolumeSpecName: "utilities") pod "ca710615-e5d1-4669-9d6c-6e6d9bc118f3" (UID: "ca710615-e5d1-4669-9d6c-6e6d9bc118f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.108584 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-kube-api-access-2wbxd" (OuterVolumeSpecName: "kube-api-access-2wbxd") pod "ca710615-e5d1-4669-9d6c-6e6d9bc118f3" (UID: "ca710615-e5d1-4669-9d6c-6e6d9bc118f3"). InnerVolumeSpecName "kube-api-access-2wbxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.153274 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca710615-e5d1-4669-9d6c-6e6d9bc118f3" (UID: "ca710615-e5d1-4669-9d6c-6e6d9bc118f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.200135 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.200168 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wbxd\" (UniqueName: \"kubernetes.io/projected/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-kube-api-access-2wbxd\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.200180 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca710615-e5d1-4669-9d6c-6e6d9bc118f3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.337546 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhkq4"]
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.348413 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhkq4"]
Dec 05 01:45:45 crc kubenswrapper[4990]: I1205 01:45:45.946109 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" path="/var/lib/kubelet/pods/ca710615-e5d1-4669-9d6c-6e6d9bc118f3/volumes"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.473721 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmml8"]
Dec 05 01:46:03 crc kubenswrapper[4990]: E1205 01:46:03.475380 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="extract-utilities"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.475436 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="extract-utilities"
Dec 05 01:46:03 crc kubenswrapper[4990]: E1205 01:46:03.475530 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="extract-content"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.475545 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="extract-content"
Dec 05 01:46:03 crc kubenswrapper[4990]: E1205 01:46:03.475562 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="registry-server"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.475575 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="registry-server"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.475862 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca710615-e5d1-4669-9d6c-6e6d9bc118f3" containerName="registry-server"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.477643 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.502730 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmml8"]
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.582340 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mk92\" (UniqueName: \"kubernetes.io/projected/5f5ebe25-e623-4eea-a9d9-4565b651e940-kube-api-access-4mk92\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.582451 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-utilities\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.582790 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-catalog-content\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.684554 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-catalog-content\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.684649 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mk92\" (UniqueName: \"kubernetes.io/projected/5f5ebe25-e623-4eea-a9d9-4565b651e940-kube-api-access-4mk92\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.684699 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-utilities\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.685199 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-utilities\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.685200 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-catalog-content\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.713643 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mk92\" (UniqueName: \"kubernetes.io/projected/5f5ebe25-e623-4eea-a9d9-4565b651e940-kube-api-access-4mk92\") pod \"certified-operators-xmml8\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") " pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:03 crc kubenswrapper[4990]: I1205 01:46:03.817080 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:04 crc kubenswrapper[4990]: I1205 01:46:04.187072 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmml8"]
Dec 05 01:46:04 crc kubenswrapper[4990]: W1205 01:46:04.198639 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5ebe25_e623_4eea_a9d9_4565b651e940.slice/crio-36fb6ae217bb0dc2a74858f6d7d425e1dfa2325378513c1f8b23b48d264beef8 WatchSource:0}: Error finding container 36fb6ae217bb0dc2a74858f6d7d425e1dfa2325378513c1f8b23b48d264beef8: Status 404 returned error can't find the container with id 36fb6ae217bb0dc2a74858f6d7d425e1dfa2325378513c1f8b23b48d264beef8
Dec 05 01:46:05 crc kubenswrapper[4990]: I1205 01:46:05.177586 4990 generic.go:334] "Generic (PLEG): container finished" podID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerID="8ff6d6d730b9a689d86b378660fdfc3a0ebd12a2510d34df34216d02f7f5b9e9" exitCode=0
Dec 05 01:46:05 crc kubenswrapper[4990]: I1205 01:46:05.177756 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmml8" event={"ID":"5f5ebe25-e623-4eea-a9d9-4565b651e940","Type":"ContainerDied","Data":"8ff6d6d730b9a689d86b378660fdfc3a0ebd12a2510d34df34216d02f7f5b9e9"}
Dec 05 01:46:05 crc kubenswrapper[4990]: I1205 01:46:05.178117 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmml8" event={"ID":"5f5ebe25-e623-4eea-a9d9-4565b651e940","Type":"ContainerStarted","Data":"36fb6ae217bb0dc2a74858f6d7d425e1dfa2325378513c1f8b23b48d264beef8"}
Dec 05 01:46:06 crc kubenswrapper[4990]: I1205 01:46:06.189878 4990 generic.go:334] "Generic (PLEG): container finished" podID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerID="e71e5b21c82fee5f2d0120a3ed5be02ebf8e3147387689f0d1bc01f6cd4f4505" exitCode=0
Dec 05 01:46:06 crc kubenswrapper[4990]: I1205 01:46:06.189975 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmml8" event={"ID":"5f5ebe25-e623-4eea-a9d9-4565b651e940","Type":"ContainerDied","Data":"e71e5b21c82fee5f2d0120a3ed5be02ebf8e3147387689f0d1bc01f6cd4f4505"}
Dec 05 01:46:07 crc kubenswrapper[4990]: I1205 01:46:07.202887 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmml8" event={"ID":"5f5ebe25-e623-4eea-a9d9-4565b651e940","Type":"ContainerStarted","Data":"260dbd78a417b62f9eb300dfe9ce00dab589e143f9b47fd60eb12aa4b6cd647f"}
Dec 05 01:46:07 crc kubenswrapper[4990]: I1205 01:46:07.226045 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmml8" podStartSLOduration=2.804827723 podStartE2EDuration="4.226025382s" podCreationTimestamp="2025-12-05 01:46:03 +0000 UTC" firstStartedPulling="2025-12-05 01:46:05.180241347 +0000 UTC m=+2263.556456748" lastFinishedPulling="2025-12-05 01:46:06.601439006 +0000 UTC m=+2264.977654407" observedRunningTime="2025-12-05 01:46:07.223606674 +0000 UTC m=+2265.599822055" watchObservedRunningTime="2025-12-05 01:46:07.226025382 +0000 UTC m=+2265.602240743"
Dec 05 01:46:13 crc kubenswrapper[4990]: I1205 01:46:13.817964 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:13 crc kubenswrapper[4990]: I1205 01:46:13.819012 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:13 crc kubenswrapper[4990]: I1205 01:46:13.895607 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:14 crc kubenswrapper[4990]: I1205 01:46:14.348126 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:14 crc kubenswrapper[4990]: I1205 01:46:14.451252 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmml8"]
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.291015 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmml8" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="registry-server" containerID="cri-o://260dbd78a417b62f9eb300dfe9ce00dab589e143f9b47fd60eb12aa4b6cd647f" gracePeriod=2
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.762812 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wtj7z"]
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.765303 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.783309 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtj7z"]
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.891835 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjsq\" (UniqueName: \"kubernetes.io/projected/7c35f677-c19e-4754-ab2c-8626f357142e-kube-api-access-hkjsq\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.891897 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-utilities\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.891931 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-catalog-content\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.993089 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkjsq\" (UniqueName: \"kubernetes.io/projected/7c35f677-c19e-4754-ab2c-8626f357142e-kube-api-access-hkjsq\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.993158 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-utilities\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.993222 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-catalog-content\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.993719 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-catalog-content\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:16 crc kubenswrapper[4990]: I1205 01:46:16.993650 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-utilities\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:17 crc kubenswrapper[4990]: I1205 01:46:17.017634 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkjsq\" (UniqueName: \"kubernetes.io/projected/7c35f677-c19e-4754-ab2c-8626f357142e-kube-api-access-hkjsq\") pod \"redhat-marketplace-wtj7z\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") " pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:17 crc kubenswrapper[4990]: I1205 01:46:17.102539 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:17 crc kubenswrapper[4990]: I1205 01:46:17.301473 4990 generic.go:334] "Generic (PLEG): container finished" podID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerID="260dbd78a417b62f9eb300dfe9ce00dab589e143f9b47fd60eb12aa4b6cd647f" exitCode=0
Dec 05 01:46:17 crc kubenswrapper[4990]: I1205 01:46:17.301533 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmml8" event={"ID":"5f5ebe25-e623-4eea-a9d9-4565b651e940","Type":"ContainerDied","Data":"260dbd78a417b62f9eb300dfe9ce00dab589e143f9b47fd60eb12aa4b6cd647f"}
Dec 05 01:46:17 crc kubenswrapper[4990]: I1205 01:46:17.587110 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtj7z"]
Dec 05 01:46:17 crc kubenswrapper[4990]: I1205 01:46:17.867309 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.011599 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-utilities\") pod \"5f5ebe25-e623-4eea-a9d9-4565b651e940\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") "
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.011721 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mk92\" (UniqueName: \"kubernetes.io/projected/5f5ebe25-e623-4eea-a9d9-4565b651e940-kube-api-access-4mk92\") pod \"5f5ebe25-e623-4eea-a9d9-4565b651e940\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") "
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.011853 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-catalog-content\") pod \"5f5ebe25-e623-4eea-a9d9-4565b651e940\" (UID: \"5f5ebe25-e623-4eea-a9d9-4565b651e940\") "
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.013091 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-utilities" (OuterVolumeSpecName: "utilities") pod "5f5ebe25-e623-4eea-a9d9-4565b651e940" (UID: "5f5ebe25-e623-4eea-a9d9-4565b651e940"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.017927 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5ebe25-e623-4eea-a9d9-4565b651e940-kube-api-access-4mk92" (OuterVolumeSpecName: "kube-api-access-4mk92") pod "5f5ebe25-e623-4eea-a9d9-4565b651e940" (UID: "5f5ebe25-e623-4eea-a9d9-4565b651e940"). InnerVolumeSpecName "kube-api-access-4mk92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.060119 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f5ebe25-e623-4eea-a9d9-4565b651e940" (UID: "5f5ebe25-e623-4eea-a9d9-4565b651e940"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.113227 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.113264 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mk92\" (UniqueName: \"kubernetes.io/projected/5f5ebe25-e623-4eea-a9d9-4565b651e940-kube-api-access-4mk92\") on node \"crc\" DevicePath \"\""
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.113274 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ebe25-e623-4eea-a9d9-4565b651e940-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.316083 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmml8" event={"ID":"5f5ebe25-e623-4eea-a9d9-4565b651e940","Type":"ContainerDied","Data":"36fb6ae217bb0dc2a74858f6d7d425e1dfa2325378513c1f8b23b48d264beef8"}
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.316183 4990 scope.go:117] "RemoveContainer" containerID="260dbd78a417b62f9eb300dfe9ce00dab589e143f9b47fd60eb12aa4b6cd647f"
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.316198 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmml8"
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.318740 4990 generic.go:334] "Generic (PLEG): container finished" podID="7c35f677-c19e-4754-ab2c-8626f357142e" containerID="f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7" exitCode=0
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.318848 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtj7z" event={"ID":"7c35f677-c19e-4754-ab2c-8626f357142e","Type":"ContainerDied","Data":"f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7"}
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.318922 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtj7z" event={"ID":"7c35f677-c19e-4754-ab2c-8626f357142e","Type":"ContainerStarted","Data":"91fb7eb230333d9373ee5514149a5709eaa377af0e805b6d27ab142a73390391"}
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.347350 4990 scope.go:117] "RemoveContainer" containerID="e71e5b21c82fee5f2d0120a3ed5be02ebf8e3147387689f0d1bc01f6cd4f4505"
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.385408 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmml8"]
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.390289 4990 scope.go:117] "RemoveContainer" containerID="8ff6d6d730b9a689d86b378660fdfc3a0ebd12a2510d34df34216d02f7f5b9e9"
Dec 05 01:46:18 crc kubenswrapper[4990]: I1205 01:46:18.416078 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmml8"]
Dec 05 01:46:19 crc kubenswrapper[4990]: I1205 01:46:19.328948 4990 generic.go:334] "Generic (PLEG): container finished" podID="7c35f677-c19e-4754-ab2c-8626f357142e" containerID="b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856" exitCode=0
Dec 05 01:46:19 crc kubenswrapper[4990]: I1205 01:46:19.329051 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtj7z" event={"ID":"7c35f677-c19e-4754-ab2c-8626f357142e","Type":"ContainerDied","Data":"b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856"}
Dec 05 01:46:19 crc kubenswrapper[4990]: I1205 01:46:19.942837 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" path="/var/lib/kubelet/pods/5f5ebe25-e623-4eea-a9d9-4565b651e940/volumes"
Dec 05 01:46:20 crc kubenswrapper[4990]: I1205 01:46:20.342994 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtj7z" event={"ID":"7c35f677-c19e-4754-ab2c-8626f357142e","Type":"ContainerStarted","Data":"3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6"}
Dec 05 01:46:20 crc kubenswrapper[4990]: I1205 01:46:20.377637 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wtj7z" podStartSLOduration=2.921535831 podStartE2EDuration="4.37761154s" podCreationTimestamp="2025-12-05 01:46:16 +0000 UTC" firstStartedPulling="2025-12-05 01:46:18.321255268 +0000 UTC m=+2276.697470659" lastFinishedPulling="2025-12-05 01:46:19.777330977 +0000 UTC m=+2278.153546368" observedRunningTime="2025-12-05 01:46:20.367930458 +0000 UTC m=+2278.744145859" watchObservedRunningTime="2025-12-05 01:46:20.37761154 +0000 UTC m=+2278.753826911"
Dec 05 01:46:27 crc kubenswrapper[4990]: I1205 01:46:27.103675 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:27 crc kubenswrapper[4990]: I1205 01:46:27.104287 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:27 crc kubenswrapper[4990]: I1205 01:46:27.184965 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:27 crc kubenswrapper[4990]: I1205 01:46:27.481694 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:27 crc kubenswrapper[4990]: I1205 01:46:27.545314 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtj7z"]
Dec 05 01:46:29 crc kubenswrapper[4990]: I1205 01:46:29.427713 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wtj7z" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="registry-server" containerID="cri-o://3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6" gracePeriod=2
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.419984 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.439200 4990 generic.go:334] "Generic (PLEG): container finished" podID="7c35f677-c19e-4754-ab2c-8626f357142e" containerID="3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6" exitCode=0
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.439269 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtj7z"
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.439288 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtj7z" event={"ID":"7c35f677-c19e-4754-ab2c-8626f357142e","Type":"ContainerDied","Data":"3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6"}
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.439613 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtj7z" event={"ID":"7c35f677-c19e-4754-ab2c-8626f357142e","Type":"ContainerDied","Data":"91fb7eb230333d9373ee5514149a5709eaa377af0e805b6d27ab142a73390391"}
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.439661 4990 scope.go:117] "RemoveContainer" containerID="3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6"
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.468825 4990 scope.go:117] "RemoveContainer" containerID="b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856"
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.494747 4990 scope.go:117] "RemoveContainer" containerID="f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7"
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.510268 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-catalog-content\") pod \"7c35f677-c19e-4754-ab2c-8626f357142e\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") "
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.510390 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkjsq\" (UniqueName: \"kubernetes.io/projected/7c35f677-c19e-4754-ab2c-8626f357142e-kube-api-access-hkjsq\") pod \"7c35f677-c19e-4754-ab2c-8626f357142e\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") "
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.510564 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-utilities\") pod \"7c35f677-c19e-4754-ab2c-8626f357142e\" (UID: \"7c35f677-c19e-4754-ab2c-8626f357142e\") "
Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.511860 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-utilities" (OuterVolumeSpecName: "utilities") pod "7c35f677-c19e-4754-ab2c-8626f357142e" (UID: "7c35f677-c19e-4754-ab2c-8626f357142e"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.513994 4990 scope.go:117] "RemoveContainer" containerID="3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6" Dec 05 01:46:30 crc kubenswrapper[4990]: E1205 01:46:30.514424 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6\": container with ID starting with 3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6 not found: ID does not exist" containerID="3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.514459 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6"} err="failed to get container status \"3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6\": rpc error: code = NotFound desc = could not find container \"3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6\": container with ID starting with 3b0324f325f3ef9373c9c11cf659100786884c3dbc77fec5e59113b33358edb6 not found: ID does not exist" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.514500 4990 scope.go:117] "RemoveContainer" containerID="b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856" Dec 05 01:46:30 crc kubenswrapper[4990]: E1205 01:46:30.514897 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856\": container with ID starting with b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856 not found: ID does not exist" containerID="b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.514945 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856"} err="failed to get container status \"b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856\": rpc error: code = NotFound desc = could not find container \"b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856\": container with ID starting with b47babc947653fe0ebe5602ae7b55b9706473b1d358809a25090d5b1fe8ac856 not found: ID does not exist" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.514977 4990 scope.go:117] "RemoveContainer" containerID="f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7" Dec 05 01:46:30 crc kubenswrapper[4990]: E1205 01:46:30.515339 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7\": container with ID starting with f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7 not found: ID does not exist" containerID="f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.515391 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7"} err="failed to get container status \"f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7\": rpc error: code = NotFound desc = could not 
find container \"f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7\": container with ID starting with f589df8fa746835fd329758c99b841db83f626428781bf7a48af5f31513f38f7 not found: ID does not exist" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.517406 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c35f677-c19e-4754-ab2c-8626f357142e-kube-api-access-hkjsq" (OuterVolumeSpecName: "kube-api-access-hkjsq") pod "7c35f677-c19e-4754-ab2c-8626f357142e" (UID: "7c35f677-c19e-4754-ab2c-8626f357142e"). InnerVolumeSpecName "kube-api-access-hkjsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.535478 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c35f677-c19e-4754-ab2c-8626f357142e" (UID: "7c35f677-c19e-4754-ab2c-8626f357142e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.612637 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.612714 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35f677-c19e-4754-ab2c-8626f357142e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.612737 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkjsq\" (UniqueName: \"kubernetes.io/projected/7c35f677-c19e-4754-ab2c-8626f357142e-kube-api-access-hkjsq\") on node \"crc\" DevicePath \"\"" Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.784971 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtj7z"] Dec 05 01:46:30 crc kubenswrapper[4990]: I1205 01:46:30.796318 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtj7z"] Dec 05 01:46:31 crc kubenswrapper[4990]: I1205 01:46:31.948347 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" path="/var/lib/kubelet/pods/7c35f677-c19e-4754-ab2c-8626f357142e/volumes" Dec 05 01:47:21 crc kubenswrapper[4990]: I1205 01:47:21.824818 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:47:21 crc kubenswrapper[4990]: I1205 01:47:21.825451 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:47:51 crc kubenswrapper[4990]: I1205 01:47:51.823522 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:47:51 crc kubenswrapper[4990]: I1205 01:47:51.825335 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:48:21 crc kubenswrapper[4990]: I1205 01:48:21.823906 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:48:21 crc kubenswrapper[4990]: I1205 01:48:21.824565 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:48:21 crc kubenswrapper[4990]: I1205 01:48:21.824650 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:48:21 crc kubenswrapper[4990]: I1205 01:48:21.825749 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:48:21 crc kubenswrapper[4990]: I1205 01:48:21.825855 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" gracePeriod=600 Dec 05 01:48:21 crc kubenswrapper[4990]: E1205 01:48:21.959313 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:48:22 crc kubenswrapper[4990]: I1205 01:48:22.500957 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" exitCode=0 Dec 05 01:48:22 crc kubenswrapper[4990]: I1205 01:48:22.501080 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398"} Dec 05 01:48:22 crc kubenswrapper[4990]: I1205 01:48:22.501191 4990 scope.go:117] "RemoveContainer" containerID="332c1c66c0c63ad31a8eb8eb91157daf8f3f3a515aaf78c4c65bb7a7320c8f26" Dec 05 
01:48:22 crc kubenswrapper[4990]: I1205 01:48:22.502222 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:48:22 crc kubenswrapper[4990]: E1205 01:48:22.502710 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:48:35 crc kubenswrapper[4990]: I1205 01:48:35.931619 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:48:35 crc kubenswrapper[4990]: E1205 01:48:35.932765 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:48:50 crc kubenswrapper[4990]: I1205 01:48:50.930425 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:48:50 crc kubenswrapper[4990]: E1205 01:48:50.931327 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:49:03 crc kubenswrapper[4990]: I1205 01:49:03.930998 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:49:03 crc kubenswrapper[4990]: E1205 01:49:03.931973 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:49:14 crc kubenswrapper[4990]: I1205 01:49:14.931213 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:49:14 crc kubenswrapper[4990]: E1205 01:49:14.932026 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:49:25 crc kubenswrapper[4990]: I1205 01:49:25.931501 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:49:25 crc 
kubenswrapper[4990]: E1205 01:49:25.933247 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:49:40 crc kubenswrapper[4990]: I1205 01:49:40.930293 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:49:40 crc kubenswrapper[4990]: E1205 01:49:40.931224 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:49:54 crc kubenswrapper[4990]: I1205 01:49:54.931815 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:49:54 crc kubenswrapper[4990]: E1205 01:49:54.933638 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:50:07 crc kubenswrapper[4990]: I1205 01:50:07.930819 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:50:07 crc kubenswrapper[4990]: E1205 01:50:07.931854 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:50:21 crc kubenswrapper[4990]: I1205 01:50:21.939053 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:50:21 crc kubenswrapper[4990]: E1205 01:50:21.941478 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:50:34 crc kubenswrapper[4990]: I1205 01:50:34.930249 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:50:34 crc kubenswrapper[4990]: E1205 01:50:34.931061 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:50:48 crc kubenswrapper[4990]: I1205 01:50:48.931296 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:50:48 crc kubenswrapper[4990]: E1205 01:50:48.932633 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:51:01 crc kubenswrapper[4990]: I1205 01:51:01.938368 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:51:01 crc kubenswrapper[4990]: E1205 01:51:01.940324 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:51:13 crc kubenswrapper[4990]: I1205 01:51:13.930170 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:51:13 crc kubenswrapper[4990]: E1205 01:51:13.931026 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:51:25 crc kubenswrapper[4990]: I1205 01:51:25.931274 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:51:25 crc kubenswrapper[4990]: E1205 01:51:25.932107 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:51:39 crc kubenswrapper[4990]: I1205 01:51:39.931020 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:51:39 crc kubenswrapper[4990]: E1205 01:51:39.932231 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:51:52 crc kubenswrapper[4990]: I1205 01:51:52.930655 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:51:52 crc kubenswrapper[4990]: E1205 01:51:52.931632 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.160894 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h8j8c/must-gather-kj8nr"] Dec 05 01:52:00 crc kubenswrapper[4990]: E1205 01:52:00.161583 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="extract-utilities" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161594 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="extract-utilities" Dec 05 01:52:00 crc kubenswrapper[4990]: E1205 01:52:00.161604 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="registry-server" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161610 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="registry-server" Dec 05 01:52:00 crc kubenswrapper[4990]: E1205 01:52:00.161627 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="extract-utilities" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161634 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="extract-utilities" Dec 05 01:52:00 crc kubenswrapper[4990]: E1205 01:52:00.161652 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="extract-content" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161658 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="extract-content" Dec 05 01:52:00 crc kubenswrapper[4990]: E1205 01:52:00.161667 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="extract-content" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161673 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="extract-content" Dec 05 01:52:00 crc kubenswrapper[4990]: E1205 01:52:00.161684 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="registry-server" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161708 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="registry-server" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161876 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c35f677-c19e-4754-ab2c-8626f357142e" containerName="registry-server" Dec 05 
01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.161885 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5ebe25-e623-4eea-a9d9-4565b651e940" containerName="registry-server" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.162619 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.164307 4990 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-h8j8c"/"default-dockercfg-h7d6w" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.164399 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h8j8c"/"openshift-service-ca.crt" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.165136 4990 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h8j8c"/"kube-root-ca.crt" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.190338 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h8j8c/must-gather-kj8nr"] Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.228838 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wcw\" (UniqueName: \"kubernetes.io/projected/88cea1db-485d-4652-949a-c6178c2a866c-kube-api-access-j7wcw\") pod \"must-gather-kj8nr\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.229131 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/88cea1db-485d-4652-949a-c6178c2a866c-must-gather-output\") pod \"must-gather-kj8nr\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.331197 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/88cea1db-485d-4652-949a-c6178c2a866c-must-gather-output\") pod \"must-gather-kj8nr\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.331394 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wcw\" (UniqueName: \"kubernetes.io/projected/88cea1db-485d-4652-949a-c6178c2a866c-kube-api-access-j7wcw\") pod \"must-gather-kj8nr\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.331719 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/88cea1db-485d-4652-949a-c6178c2a866c-must-gather-output\") pod \"must-gather-kj8nr\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.351269 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wcw\" (UniqueName: \"kubernetes.io/projected/88cea1db-485d-4652-949a-c6178c2a866c-kube-api-access-j7wcw\") pod \"must-gather-kj8nr\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: 
I1205 01:52:00.479770 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.914240 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h8j8c/must-gather-kj8nr"] Dec 05 01:52:00 crc kubenswrapper[4990]: I1205 01:52:00.917240 4990 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 01:52:01 crc kubenswrapper[4990]: I1205 01:52:01.562390 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" event={"ID":"88cea1db-485d-4652-949a-c6178c2a866c","Type":"ContainerStarted","Data":"23f1e56d642aec03d1dcd071abc5bb44b3915da75d0da3c8e1ac6ca9ae3c5372"} Dec 05 01:52:05 crc kubenswrapper[4990]: I1205 01:52:05.593327 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" event={"ID":"88cea1db-485d-4652-949a-c6178c2a866c","Type":"ContainerStarted","Data":"10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972"} Dec 05 01:52:05 crc kubenswrapper[4990]: I1205 01:52:05.593943 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" event={"ID":"88cea1db-485d-4652-949a-c6178c2a866c","Type":"ContainerStarted","Data":"16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba"} Dec 05 01:52:05 crc kubenswrapper[4990]: I1205 01:52:05.615565 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" podStartSLOduration=1.806932223 podStartE2EDuration="5.615538799s" podCreationTimestamp="2025-12-05 01:52:00 +0000 UTC" firstStartedPulling="2025-12-05 01:52:00.916833504 +0000 UTC m=+2619.293048885" lastFinishedPulling="2025-12-05 01:52:04.72544006 +0000 UTC m=+2623.101655461" observedRunningTime="2025-12-05 01:52:05.610956611 +0000 UTC m=+2623.987172012" watchObservedRunningTime="2025-12-05 01:52:05.615538799 +0000 UTC m=+2623.991754200" Dec 05 01:52:06 crc kubenswrapper[4990]: I1205 01:52:06.930367 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:52:06 crc kubenswrapper[4990]: E1205 01:52:06.930970 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:52:21 crc kubenswrapper[4990]: I1205 01:52:21.936973 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:52:21 crc kubenswrapper[4990]: E1205 01:52:21.937981 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:52:36 crc kubenswrapper[4990]: I1205 01:52:36.930115 4990 scope.go:117] "RemoveContainer" 
containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:52:36 crc kubenswrapper[4990]: E1205 01:52:36.930924 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:52:48 crc kubenswrapper[4990]: I1205 01:52:48.931238 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:52:48 crc kubenswrapper[4990]: E1205 01:52:48.932279 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.332933 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/util/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.459372 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/util/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.486961 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/pull/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.508795 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/pull/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.665829 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/pull/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.673783 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/extract/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.694606 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_58c134150ba87862a072ee99906936458cca3557b039fbc67862da7cffmd8vk_25369b8e-20b7-4826-821e-f4db1d2e533f/util/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.823834 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-6fgdz_e908e515-9470-4a27-912f-a266a4ffe3a9/kube-rbac-proxy/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.900972 4990 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-6fgdz_e908e515-9470-4a27-912f-a266a4ffe3a9/manager/0.log" Dec 05 01:53:01 crc kubenswrapper[4990]: I1205 01:53:01.924982 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-t42g2_a95995a7-92e3-40c0-8fad-30e47ea759e1/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.024616 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-t42g2_a95995a7-92e3-40c0-8fad-30e47ea759e1/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.095147 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-t29m4_63a6f5c3-f437-478f-b72c-afcae7a4dba8/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.136243 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-t29m4_63a6f5c3-f437-478f-b72c-afcae7a4dba8/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.294439 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-wgzhs_c19a83c4-e130-47a2-81d2-04dbea61d6c1/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.336282 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-wgzhs_c19a83c4-e130-47a2-81d2-04dbea61d6c1/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.437596 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-h8nnl_f54f5881-49fa-4cfa-88d9-20d0b0d9c082/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.454328 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-h8nnl_f54f5881-49fa-4cfa-88d9-20d0b0d9c082/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.512221 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-fqcx2_53d5cea9-4a5f-4663-8511-4e830d5c86bc/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.606147 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-fqcx2_53d5cea9-4a5f-4663-8511-4e830d5c86bc/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.699376 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-b47qg_8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.815209 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-p8zs5_45e4a6f1-ff34-4e14-8a58-c3e88d998169/kube-rbac-proxy/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.835896 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-b47qg_8d7cb9a8-32ec-40ef-8504-8f280c6ad2e9/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.914584 4990 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-p8zs5_45e4a6f1-ff34-4e14-8a58-c3e88d998169/manager/0.log" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.930010 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:53:02 crc kubenswrapper[4990]: E1205 01:53:02.930229 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:53:02 crc kubenswrapper[4990]: I1205 01:53:02.975808 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-lr2g9_9d1a9c70-0d24-476f-a857-b06e637e24b5/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.051648 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-lr2g9_9d1a9c70-0d24-476f-a857-b06e637e24b5/manager/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.161325 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-sqqtl_55e02f15-1f53-4eb9-84fb-61a260485ebf/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.172125 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-sqqtl_55e02f15-1f53-4eb9-84fb-61a260485ebf/manager/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.362810 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-88qgd_b7da1eba-30c5-45a4-819d-7aef2af480c8/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.365682 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-88qgd_b7da1eba-30c5-45a4-819d-7aef2af480c8/manager/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.435067 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-9mdd5_f6574447-fe34-4fc6-a99d-8f9898a73019/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.553881 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-9mdd5_f6574447-fe34-4fc6-a99d-8f9898a73019/manager/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.584334 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-mhqsp_a25b5669-b148-428b-a654-4a1effd836f5/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.689269 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-mhqsp_a25b5669-b148-428b-a654-4a1effd836f5/manager/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.754944 4990 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-k7zzp_4ac7ce06-d864-4577-a628-201945f57f8a/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.796592 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-k7zzp_4ac7ce06-d864-4577-a628-201945f57f8a/manager/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.901645 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt_58d8a7d1-7337-4d4f-ae63-04862be6a86a/kube-rbac-proxy/0.log" Dec 05 01:53:03 crc kubenswrapper[4990]: I1205 01:53:03.913452 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sh7wt_58d8a7d1-7337-4d4f-ae63-04862be6a86a/manager/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.251327 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6f79d9dccc-fzjsf_14a886d8-c123-4618-84d2-ba3b0e29ac4b/operator/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.269807 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2rn2k_a781f496-a9fd-4e90-8030-853e55d8c7d9/registry-server/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.443397 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wk9np_438c87d1-af5c-42ee-988c-82d88ebd6439/kube-rbac-proxy/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.505219 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wk9np_438c87d1-af5c-42ee-988c-82d88ebd6439/manager/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.543266 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-gc6bg_b80a8ebf-4453-4f97-9bc3-9c3d8371b868/kube-rbac-proxy/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.630450 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-gc6bg_b80a8ebf-4453-4f97-9bc3-9c3d8371b868/manager/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.704439 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79966545b7-krksl_dec00109-2be0-4153-86df-7ad985b1f396/manager/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.729894 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6g29k_4aeaf936-39c4-4558-bba8-c47839e79431/operator/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.817873 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-4fl28_683b019b-d147-4c85-b537-e4000a14dfed/kube-rbac-proxy/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.874782 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-4fl28_683b019b-d147-4c85-b537-e4000a14dfed/manager/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.911941 4990 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-jfnfg_4879650d-849b-496e-b8de-92dde4a62982/kube-rbac-proxy/0.log" Dec 05 01:53:04 crc kubenswrapper[4990]: I1205 01:53:04.991820 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-jfnfg_4879650d-849b-496e-b8de-92dde4a62982/manager/0.log" Dec 05 01:53:05 crc kubenswrapper[4990]: I1205 01:53:05.032649 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-lfhcj_bd1e0999-c2d5-4712-b995-18e7778231cf/kube-rbac-proxy/0.log" Dec 05 01:53:05 crc kubenswrapper[4990]: I1205 01:53:05.072003 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-lfhcj_bd1e0999-c2d5-4712-b995-18e7778231cf/manager/0.log" Dec 05 01:53:05 crc kubenswrapper[4990]: I1205 01:53:05.147264 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-mv9jg_307385f4-34c6-473a-a3d6-c0be9a334b68/kube-rbac-proxy/0.log" Dec 05 01:53:05 crc kubenswrapper[4990]: I1205 01:53:05.207595 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-mv9jg_307385f4-34c6-473a-a3d6-c0be9a334b68/manager/0.log" Dec 05 01:53:17 crc kubenswrapper[4990]: I1205 01:53:17.930914 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:53:17 crc kubenswrapper[4990]: E1205 01:53:17.931748 4990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zxlh5_openshift-machine-config-operator(b6580a04-67de-48f9-9da2-56cb4377af48)\"" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" Dec 05 01:53:24 crc kubenswrapper[4990]: I1205 01:53:24.198161 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7rszv_1a0a0305-99f9-45d5-b298-383c5f6cc4f6/control-plane-machine-set-operator/0.log" Dec 05 01:53:24 crc kubenswrapper[4990]: I1205 01:53:24.367630 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t5772_f9467079-b825-4a3d-b56c-254057a3b5fb/kube-rbac-proxy/0.log" Dec 05 01:53:24 crc kubenswrapper[4990]: I1205 01:53:24.373529 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t5772_f9467079-b825-4a3d-b56c-254057a3b5fb/machine-api-operator/0.log" Dec 05 01:53:32 crc kubenswrapper[4990]: I1205 01:53:32.931340 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:53:33 crc kubenswrapper[4990]: I1205 01:53:33.278085 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"f25f002653a5de7ace0b883c87587722f0570000f68e8d959c1c7196ab15b8c7"} Dec 05 01:53:37 crc kubenswrapper[4990]: I1205 01:53:37.517199 4990 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-j6bkz_81603236-0e38-4834-8f5f-031321e5862c/cert-manager-controller/0.log" Dec 05 01:53:37 crc kubenswrapper[4990]: I1205 01:53:37.702469 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-7rd9j_11e7cf47-261e-4451-b1c0-b325a8236fae/cert-manager-cainjector/0.log" Dec 05 01:53:37 crc kubenswrapper[4990]: I1205 01:53:37.772831 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-86w72_92a9406b-f802-4b93-b392-5121fb343101/cert-manager-webhook/0.log" Dec 05 01:53:50 crc kubenswrapper[4990]: I1205 01:53:50.419258 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-f48rv_2ddd952b-ae34-497f-b7d0-e428cb8eb66a/nmstate-console-plugin/0.log" Dec 05 01:53:50 crc kubenswrapper[4990]: I1205 01:53:50.539369 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mgpb7_4b233ec7-934e-4cef-a18b-8b8c9f36e23e/nmstate-handler/0.log" Dec 05 01:53:50 crc kubenswrapper[4990]: I1205 01:53:50.592627 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-wckwv_5ee24fe7-1614-4d3b-8501-c7c1cdf4449f/kube-rbac-proxy/0.log" Dec 05 01:53:50 crc kubenswrapper[4990]: I1205 01:53:50.598901 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-wckwv_5ee24fe7-1614-4d3b-8501-c7c1cdf4449f/nmstate-metrics/0.log" Dec 05 01:53:50 crc kubenswrapper[4990]: I1205 01:53:50.739732 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-jbx5x_0e74c706-ea18-4a4f-8056-bba53a53edf9/nmstate-operator/0.log" Dec 05 01:53:50 crc kubenswrapper[4990]: I1205 01:53:50.774131 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-w7vq9_a3077203-24e9-4351-8ba8-5bcaa5942894/nmstate-webhook/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.507078 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-877sl_8c02539b-f293-4f43-94ef-aefdd98984bc/kube-rbac-proxy/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.750890 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-frr-files/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.813466 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-877sl_8c02539b-f293-4f43-94ef-aefdd98984bc/controller/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.928528 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-metrics/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.928555 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-reloader/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.930058 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-frr-files/0.log" Dec 05 01:54:05 crc kubenswrapper[4990]: I1205 01:54:05.985561 4990 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-reloader/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.156025 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-reloader/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.157594 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-metrics/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.165038 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-frr-files/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.170518 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-metrics/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.309338 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-frr-files/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.317834 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-reloader/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.320377 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/cp-metrics/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.338970 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/controller/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.472426 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/kube-rbac-proxy/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.487474 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/frr-metrics/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.523747 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/kube-rbac-proxy-frr/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.637249 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/reloader/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.742388 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kscb8_602b14df-eebd-4d44-bf25-721b9f11fc17/frr-k8s-webhook-server/0.log" Dec 05 01:54:06 crc kubenswrapper[4990]: I1205 01:54:06.908801 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7558d5d6d4-pk6pd_8e061a9c-0157-408c-85e1-bec1856d263e/manager/0.log" Dec 05 01:54:07 crc kubenswrapper[4990]: I1205 01:54:07.067789 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5fd747f769-nctkb_7bb0a2c9-b86f-4393-a0f8-30e0d52aac17/webhook-server/0.log" Dec 05 01:54:07 crc kubenswrapper[4990]: I1205 01:54:07.103320 4990 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-9bw6h_d0b0ae55-f1c8-4437-83cb-188142db3523/kube-rbac-proxy/0.log" Dec 05 01:54:07 crc kubenswrapper[4990]: I1205 01:54:07.527663 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9bw6h_d0b0ae55-f1c8-4437-83cb-188142db3523/speaker/0.log" Dec 05 01:54:07 crc kubenswrapper[4990]: I1205 01:54:07.538591 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2ptxd_c36c2427-0455-4931-afa2-940f33ce9854/frr/0.log" Dec 05 01:54:20 crc kubenswrapper[4990]: I1205 01:54:20.988403 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/util/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.203626 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/util/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.223510 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/pull/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.243262 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/pull/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.392921 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/util/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.393547 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/extract/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.410726 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2w8cv_f475647f-cf4a-47da-844d-a2952b514ea0/pull/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.550961 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/util/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.728432 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/pull/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.730324 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/pull/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.773156 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/util/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.935361 4990 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/pull/0.log" Dec 05 01:54:21 crc kubenswrapper[4990]: I1205 01:54:21.969692 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/util/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.008459 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fr8s5z_3cc34209-d840-4863-b29e-98d64972e9c7/extract/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.148415 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/util/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.280257 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/util/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.302047 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/pull/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.334122 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/pull/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.484534 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/pull/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.515222 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/util/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.556327 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2jb9_f17d56f5-716f-4187-b328-abee78a41a82/extract/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.672120 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/extract-utilities/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.893643 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/extract-utilities/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.922741 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/extract-content/0.log" Dec 05 01:54:22 crc kubenswrapper[4990]: I1205 01:54:22.947821 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/extract-content/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.082145 4990 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/extract-content/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.082957 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/extract-utilities/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.294403 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/extract-utilities/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.490682 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/extract-utilities/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.494170 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cb787_ddfc56ca-24fc-4541-8068-4185d01d16c1/registry-server/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.520260 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/extract-content/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.521458 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/extract-content/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.658137 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/extract-content/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.679540 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/extract-utilities/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.894190 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lz2ns_24357b8a-a4f6-43dd-ac9f-d563fa8762d4/marketplace-operator/0.log" Dec 05 01:54:23 crc kubenswrapper[4990]: I1205 01:54:23.928553 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/extract-utilities/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.073711 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/extract-utilities/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.168861 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-whvp8_9a7d8297-4858-4ff7-ac43-3ee1771383a8/registry-server/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.202635 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/extract-content/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.216712 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/extract-content/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.267214 4990 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/extract-utilities/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.371844 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/extract-content/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.429056 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lmm8t_a65b7b02-ae3b-432a-815e-f38c1b2beb4d/registry-server/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.528336 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/extract-utilities/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.709129 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/extract-content/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.730408 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/extract-content/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.760129 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/extract-utilities/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.911887 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/extract-content/0.log" Dec 05 01:54:24 crc kubenswrapper[4990]: I1205 01:54:24.935402 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/extract-utilities/0.log" Dec 05 01:54:25 crc kubenswrapper[4990]: I1205 01:54:25.254594 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5wwxs_3336377a-d1c7-4c90-a824-63aa62dd945c/registry-server/0.log" Dec 05 01:55:29 crc kubenswrapper[4990]: I1205 01:55:29.180533 4990 generic.go:334] "Generic (PLEG): container finished" podID="88cea1db-485d-4652-949a-c6178c2a866c" containerID="16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba" exitCode=0 Dec 05 01:55:29 crc kubenswrapper[4990]: I1205 01:55:29.180630 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" event={"ID":"88cea1db-485d-4652-949a-c6178c2a866c","Type":"ContainerDied","Data":"16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba"} Dec 05 01:55:29 crc kubenswrapper[4990]: I1205 01:55:29.182174 4990 scope.go:117] "RemoveContainer" containerID="16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba" Dec 05 01:55:29 crc kubenswrapper[4990]: I1205 01:55:29.638238 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h8j8c_must-gather-kj8nr_88cea1db-485d-4652-949a-c6178c2a866c/gather/0.log" Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.515665 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h8j8c/must-gather-kj8nr"] Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.516327 4990 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-must-gather-h8j8c/must-gather-kj8nr" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="copy" containerID="cri-o://10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972" gracePeriod=2 Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.523220 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h8j8c/must-gather-kj8nr"] Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.878498 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h8j8c_must-gather-kj8nr_88cea1db-485d-4652-949a-c6178c2a866c/copy/0.log" Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.879181 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.988753 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/88cea1db-485d-4652-949a-c6178c2a866c-must-gather-output\") pod \"88cea1db-485d-4652-949a-c6178c2a866c\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.989272 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7wcw\" (UniqueName: \"kubernetes.io/projected/88cea1db-485d-4652-949a-c6178c2a866c-kube-api-access-j7wcw\") pod \"88cea1db-485d-4652-949a-c6178c2a866c\" (UID: \"88cea1db-485d-4652-949a-c6178c2a866c\") " Dec 05 01:55:36 crc kubenswrapper[4990]: I1205 01:55:36.994911 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cea1db-485d-4652-949a-c6178c2a866c-kube-api-access-j7wcw" (OuterVolumeSpecName: "kube-api-access-j7wcw") pod "88cea1db-485d-4652-949a-c6178c2a866c" (UID: "88cea1db-485d-4652-949a-c6178c2a866c"). InnerVolumeSpecName "kube-api-access-j7wcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.076774 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88cea1db-485d-4652-949a-c6178c2a866c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "88cea1db-485d-4652-949a-c6178c2a866c" (UID: "88cea1db-485d-4652-949a-c6178c2a866c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.090747 4990 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/88cea1db-485d-4652-949a-c6178c2a866c-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.090779 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7wcw\" (UniqueName: \"kubernetes.io/projected/88cea1db-485d-4652-949a-c6178c2a866c-kube-api-access-j7wcw\") on node \"crc\" DevicePath \"\"" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.249403 4990 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h8j8c_must-gather-kj8nr_88cea1db-485d-4652-949a-c6178c2a866c/copy/0.log" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.249887 4990 generic.go:334] "Generic (PLEG): container finished" podID="88cea1db-485d-4652-949a-c6178c2a866c" containerID="10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972" exitCode=143 Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.249946 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h8j8c/must-gather-kj8nr" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.249980 4990 scope.go:117] "RemoveContainer" containerID="10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.285742 4990 scope.go:117] "RemoveContainer" containerID="16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.353998 4990 scope.go:117] "RemoveContainer" containerID="10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972" Dec 05 01:55:37 crc kubenswrapper[4990]: E1205 01:55:37.355060 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972\": container with ID starting with 10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972 not found: ID does not exist" containerID="10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.355119 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972"} err="failed to get container status \"10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972\": rpc error: code = NotFound desc = could not find container \"10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972\": container with ID starting with 10c613f25d7338df90e6bcabcbe4cf4615e40dc7af3a8ff1597a40c23d7c2972 not found: ID does not exist" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.355240 4990 scope.go:117] "RemoveContainer" containerID="16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba" Dec 05 01:55:37 crc kubenswrapper[4990]: E1205 01:55:37.356005 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba\": container with ID starting with 16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba not found: ID does not exist" containerID="16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba" Dec 05 01:55:37 crc 
kubenswrapper[4990]: I1205 01:55:37.356069 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba"} err="failed to get container status \"16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba\": rpc error: code = NotFound desc = could not find container \"16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba\": container with ID starting with 16072f91340c1d4178db084fbecd44398f5f11eb7173935e8a28cf4e8900bcba not found: ID does not exist" Dec 05 01:55:37 crc kubenswrapper[4990]: I1205 01:55:37.940519 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cea1db-485d-4652-949a-c6178c2a866c" path="/var/lib/kubelet/pods/88cea1db-485d-4652-949a-c6178c2a866c/volumes" Dec 05 01:55:51 crc kubenswrapper[4990]: I1205 01:55:51.824077 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:55:51 crc kubenswrapper[4990]: I1205 01:55:51.824553 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.483741 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mjw7t"] Dec 05 01:55:58 crc kubenswrapper[4990]: E1205 01:55:58.484838 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="copy" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.484861 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="copy" Dec 05 01:55:58 crc kubenswrapper[4990]: E1205 01:55:58.484879 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="gather" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.484892 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="gather" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.485146 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="gather" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.485173 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cea1db-485d-4652-949a-c6178c2a866c" containerName="copy" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.486906 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.509046 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mjw7t"] Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.624844 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4zf\" (UniqueName: \"kubernetes.io/projected/fbb69e0d-f6e5-460d-b998-af7085549720-kube-api-access-6d4zf\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.624945 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-utilities\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.624987 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-catalog-content\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.726163 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-utilities\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.726232 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-catalog-content\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.726279 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4zf\" (UniqueName: \"kubernetes.io/projected/fbb69e0d-f6e5-460d-b998-af7085549720-kube-api-access-6d4zf\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.726755 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-utilities\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.726775 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-catalog-content\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.744062 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6d4zf\" (UniqueName: \"kubernetes.io/projected/fbb69e0d-f6e5-460d-b998-af7085549720-kube-api-access-6d4zf\") pod \"redhat-operators-mjw7t\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:58 crc kubenswrapper[4990]: I1205 01:55:58.835119 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:55:59 crc kubenswrapper[4990]: I1205 01:55:59.258287 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mjw7t"] Dec 05 01:55:59 crc kubenswrapper[4990]: I1205 01:55:59.449376 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerStarted","Data":"7361f07a3ac8f8225208412f67678851b41ff866d9835f382fdf3ef72d4e06f2"} Dec 05 01:56:00 crc kubenswrapper[4990]: I1205 01:56:00.463329 4990 generic.go:334] "Generic (PLEG): container finished" podID="fbb69e0d-f6e5-460d-b998-af7085549720" containerID="f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561" exitCode=0 Dec 05 01:56:00 crc kubenswrapper[4990]: I1205 01:56:00.463403 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerDied","Data":"f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561"} Dec 05 01:56:01 crc kubenswrapper[4990]: I1205 01:56:01.473729 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerStarted","Data":"f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e"} Dec 05 01:56:02 crc kubenswrapper[4990]: I1205 01:56:02.482087 4990 generic.go:334] "Generic (PLEG): container finished" podID="fbb69e0d-f6e5-460d-b998-af7085549720" containerID="f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e" exitCode=0 Dec 05 01:56:02 crc kubenswrapper[4990]: I1205 01:56:02.482162 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerDied","Data":"f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e"} Dec 05 01:56:03 crc kubenswrapper[4990]: I1205 01:56:03.491068 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerStarted","Data":"dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a"} Dec 05 01:56:03 crc kubenswrapper[4990]: I1205 01:56:03.511160 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mjw7t" podStartSLOduration=3.049932302 podStartE2EDuration="5.51114128s" podCreationTimestamp="2025-12-05 01:55:58 +0000 UTC" firstStartedPulling="2025-12-05 01:56:00.466218249 +0000 UTC m=+2858.842433640" lastFinishedPulling="2025-12-05 01:56:02.927427257 +0000 UTC m=+2861.303642618" observedRunningTime="2025-12-05 01:56:03.509246626 +0000 UTC m=+2861.885461997" watchObservedRunningTime="2025-12-05 01:56:03.51114128 +0000 UTC m=+2861.887356641" Dec 05 01:56:08 crc kubenswrapper[4990]: I1205 01:56:08.835601 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 
05 01:56:08 crc kubenswrapper[4990]: I1205 01:56:08.835903 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:56:09 crc kubenswrapper[4990]: I1205 01:56:09.874654 4990 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mjw7t" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="registry-server" probeResult="failure" output=< Dec 05 01:56:09 crc kubenswrapper[4990]: timeout: failed to connect service ":50051" within 1s Dec 05 01:56:09 crc kubenswrapper[4990]: > Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.778205 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jnx58"] Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.782059 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.797276 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnx58"] Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.887955 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.930313 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.937289 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95qx\" (UniqueName: \"kubernetes.io/projected/3f86bcb1-d24a-40ba-98ee-21972b0c3536-kube-api-access-v95qx\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.937359 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-utilities\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:18 crc kubenswrapper[4990]: I1205 01:56:18.937382 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-catalog-content\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.038543 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-utilities\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.038592 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-catalog-content\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 
crc kubenswrapper[4990]: I1205 01:56:19.038735 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v95qx\" (UniqueName: \"kubernetes.io/projected/3f86bcb1-d24a-40ba-98ee-21972b0c3536-kube-api-access-v95qx\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.039143 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-utilities\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.039399 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-catalog-content\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.069946 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v95qx\" (UniqueName: \"kubernetes.io/projected/3f86bcb1-d24a-40ba-98ee-21972b0c3536-kube-api-access-v95qx\") pod \"certified-operators-jnx58\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.109129 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.370209 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnx58"] Dec 05 01:56:19 crc kubenswrapper[4990]: W1205 01:56:19.376830 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f86bcb1_d24a_40ba_98ee_21972b0c3536.slice/crio-3b54541431c1e9cb0db3df3f778db23acb83c0ee3ea89ab61ec745c22e5060bc WatchSource:0}: Error finding container 3b54541431c1e9cb0db3df3f778db23acb83c0ee3ea89ab61ec745c22e5060bc: Status 404 returned error can't find the container with id 3b54541431c1e9cb0db3df3f778db23acb83c0ee3ea89ab61ec745c22e5060bc Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.636768 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerStarted","Data":"9a12fe4fb93b1279bc19c5badc7e0468ed779fb74043e5deb5e28250dba1b2d9"} Dec 05 01:56:19 crc kubenswrapper[4990]: I1205 01:56:19.636819 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerStarted","Data":"3b54541431c1e9cb0db3df3f778db23acb83c0ee3ea89ab61ec745c22e5060bc"} Dec 05 01:56:20 crc kubenswrapper[4990]: I1205 01:56:20.650018 4990 generic.go:334] "Generic (PLEG): container finished" podID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerID="9a12fe4fb93b1279bc19c5badc7e0468ed779fb74043e5deb5e28250dba1b2d9" exitCode=0 Dec 05 01:56:20 crc kubenswrapper[4990]: I1205 01:56:20.650175 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" 
event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerDied","Data":"9a12fe4fb93b1279bc19c5badc7e0468ed779fb74043e5deb5e28250dba1b2d9"} Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.146377 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mjw7t"] Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.146633 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mjw7t" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="registry-server" containerID="cri-o://dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a" gracePeriod=2 Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.576851 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.659747 4990 generic.go:334] "Generic (PLEG): container finished" podID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerID="37020eda01018a6ad69c1218e9b3a72b2d7844bdbfa2afc651fae81e6931e54c" exitCode=0 Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.659793 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerDied","Data":"37020eda01018a6ad69c1218e9b3a72b2d7844bdbfa2afc651fae81e6931e54c"} Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.664877 4990 generic.go:334] "Generic (PLEG): container finished" podID="fbb69e0d-f6e5-460d-b998-af7085549720" containerID="dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a" exitCode=0 Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.664921 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerDied","Data":"dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a"} Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.664939 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mjw7t" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.664961 4990 scope.go:117] "RemoveContainer" containerID="dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.664949 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mjw7t" event={"ID":"fbb69e0d-f6e5-460d-b998-af7085549720","Type":"ContainerDied","Data":"7361f07a3ac8f8225208412f67678851b41ff866d9835f382fdf3ef72d4e06f2"} Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.679110 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-utilities\") pod \"fbb69e0d-f6e5-460d-b998-af7085549720\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.679251 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-catalog-content\") pod \"fbb69e0d-f6e5-460d-b998-af7085549720\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.679409 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d4zf\" (UniqueName: \"kubernetes.io/projected/fbb69e0d-f6e5-460d-b998-af7085549720-kube-api-access-6d4zf\") pod \"fbb69e0d-f6e5-460d-b998-af7085549720\" (UID: \"fbb69e0d-f6e5-460d-b998-af7085549720\") " Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.680994 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-utilities" (OuterVolumeSpecName: "utilities") pod "fbb69e0d-f6e5-460d-b998-af7085549720" (UID: "fbb69e0d-f6e5-460d-b998-af7085549720"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.687222 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb69e0d-f6e5-460d-b998-af7085549720-kube-api-access-6d4zf" (OuterVolumeSpecName: "kube-api-access-6d4zf") pod "fbb69e0d-f6e5-460d-b998-af7085549720" (UID: "fbb69e0d-f6e5-460d-b998-af7085549720"). InnerVolumeSpecName "kube-api-access-6d4zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.690298 4990 scope.go:117] "RemoveContainer" containerID="f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.734868 4990 scope.go:117] "RemoveContainer" containerID="f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.758269 4990 scope.go:117] "RemoveContainer" containerID="dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a" Dec 05 01:56:21 crc kubenswrapper[4990]: E1205 01:56:21.758710 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a\": container with ID starting with dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a not found: ID does not exist" containerID="dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.758745 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a"} err="failed to get container status \"dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a\": rpc error: code = NotFound desc = could not find container \"dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a\": container with ID starting with dacb7b2683b193716f1d44aa20b82e322b642fed98a24c9489ab867f0fc9a45a not found: ID does not exist" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.758772 4990 scope.go:117] "RemoveContainer" containerID="f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e" Dec 05 01:56:21 crc kubenswrapper[4990]: E1205 01:56:21.758994 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e\": container with ID starting with f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e not found: ID does not exist" containerID="f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.759022 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e"} err="failed to get container status \"f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e\": rpc error: code = NotFound desc = could not find container \"f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e\": container with ID starting with f6942b744c04347f51dca9852d58e0f86ef280f653bdcbf15a93465c2e8a001e not found: ID does not exist" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.759039 4990 scope.go:117] "RemoveContainer" containerID="f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561" Dec 05 01:56:21 crc kubenswrapper[4990]: E1205 01:56:21.759219 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561\": container with ID starting with f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561 not found: ID does not exist" containerID="f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561" Dec 05 01:56:21 crc 
kubenswrapper[4990]: I1205 01:56:21.759239 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561"} err="failed to get container status \"f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561\": rpc error: code = NotFound desc = could not find container \"f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561\": container with ID starting with f965b19cabd23c446f8a5765f2792a55c61cba6763283c5f0c1cffb865992561 not found: ID does not exist" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.781873 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.781912 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d4zf\" (UniqueName: \"kubernetes.io/projected/fbb69e0d-f6e5-460d-b998-af7085549720-kube-api-access-6d4zf\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.809836 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbb69e0d-f6e5-460d-b998-af7085549720" (UID: "fbb69e0d-f6e5-460d-b998-af7085549720"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.823912 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.823965 4990 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.883244 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb69e0d-f6e5-460d-b998-af7085549720-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:21 crc kubenswrapper[4990]: I1205 01:56:21.995326 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mjw7t"] Dec 05 01:56:22 crc kubenswrapper[4990]: I1205 01:56:22.004872 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mjw7t"] Dec 05 01:56:22 crc kubenswrapper[4990]: I1205 01:56:22.675148 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerStarted","Data":"da5e8105ab6b758d1c12bea1f04cda4e5b78a24d6cb57e6b029dfd07d37e2e84"} Dec 05 01:56:22 crc kubenswrapper[4990]: I1205 01:56:22.697799 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jnx58" podStartSLOduration=3.274102556 podStartE2EDuration="4.697772876s" podCreationTimestamp="2025-12-05 01:56:18 +0000 UTC" 
firstStartedPulling="2025-12-05 01:56:20.651738 +0000 UTC m=+2879.027953371" lastFinishedPulling="2025-12-05 01:56:22.0754083 +0000 UTC m=+2880.451623691" observedRunningTime="2025-12-05 01:56:22.693609808 +0000 UTC m=+2881.069825169" watchObservedRunningTime="2025-12-05 01:56:22.697772876 +0000 UTC m=+2881.073988257" Dec 05 01:56:23 crc kubenswrapper[4990]: I1205 01:56:23.940559 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" path="/var/lib/kubelet/pods/fbb69e0d-f6e5-460d-b998-af7085549720/volumes" Dec 05 01:56:29 crc kubenswrapper[4990]: I1205 01:56:29.110310 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:29 crc kubenswrapper[4990]: I1205 01:56:29.111038 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:29 crc kubenswrapper[4990]: I1205 01:56:29.181169 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:29 crc kubenswrapper[4990]: I1205 01:56:29.808567 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:29 crc kubenswrapper[4990]: I1205 01:56:29.877110 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnx58"] Dec 05 01:56:31 crc kubenswrapper[4990]: I1205 01:56:31.752955 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jnx58" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="registry-server" containerID="cri-o://da5e8105ab6b758d1c12bea1f04cda4e5b78a24d6cb57e6b029dfd07d37e2e84" gracePeriod=2 Dec 05 01:56:32 crc kubenswrapper[4990]: I1205 01:56:32.763978 4990 generic.go:334] "Generic (PLEG): container finished" podID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerID="da5e8105ab6b758d1c12bea1f04cda4e5b78a24d6cb57e6b029dfd07d37e2e84" exitCode=0 Dec 05 01:56:32 crc kubenswrapper[4990]: I1205 01:56:32.764021 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerDied","Data":"da5e8105ab6b758d1c12bea1f04cda4e5b78a24d6cb57e6b029dfd07d37e2e84"} Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.291627 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.360106 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v95qx\" (UniqueName: \"kubernetes.io/projected/3f86bcb1-d24a-40ba-98ee-21972b0c3536-kube-api-access-v95qx\") pod \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.360210 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-catalog-content\") pod \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.360263 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-utilities\") pod \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\" (UID: \"3f86bcb1-d24a-40ba-98ee-21972b0c3536\") " Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.361296 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-utilities" (OuterVolumeSpecName: "utilities") pod "3f86bcb1-d24a-40ba-98ee-21972b0c3536" (UID: "3f86bcb1-d24a-40ba-98ee-21972b0c3536"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.371002 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f86bcb1-d24a-40ba-98ee-21972b0c3536-kube-api-access-v95qx" (OuterVolumeSpecName: "kube-api-access-v95qx") pod "3f86bcb1-d24a-40ba-98ee-21972b0c3536" (UID: "3f86bcb1-d24a-40ba-98ee-21972b0c3536"). InnerVolumeSpecName "kube-api-access-v95qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.419936 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f86bcb1-d24a-40ba-98ee-21972b0c3536" (UID: "3f86bcb1-d24a-40ba-98ee-21972b0c3536"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.462763 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v95qx\" (UniqueName: \"kubernetes.io/projected/3f86bcb1-d24a-40ba-98ee-21972b0c3536-kube-api-access-v95qx\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.462802 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.462859 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f86bcb1-d24a-40ba-98ee-21972b0c3536-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.495782 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nlkvv"] Dec 05 01:56:33 crc kubenswrapper[4990]: E1205 01:56:33.496124 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="registry-server" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496143 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="registry-server" Dec 05 01:56:33 crc kubenswrapper[4990]: E1205 01:56:33.496159 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="extract-content" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496168 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="extract-content" Dec 05 01:56:33 crc kubenswrapper[4990]: E1205 01:56:33.496182 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="registry-server" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496191 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="registry-server" Dec 05 01:56:33 crc kubenswrapper[4990]: E1205 01:56:33.496205 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="extract-utilities" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496214 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="extract-utilities" Dec 05 01:56:33 crc kubenswrapper[4990]: E1205 01:56:33.496238 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="extract-utilities" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496246 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="extract-utilities" Dec 05 01:56:33 crc kubenswrapper[4990]: E1205 01:56:33.496265 4990 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="extract-content" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496272 4990 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="extract-content" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496426 4990 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" containerName="registry-server" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.496444 4990 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb69e0d-f6e5-460d-b998-af7085549720" containerName="registry-server" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.498001 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.517170 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlkvv"] Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.564735 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-catalog-content\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.564805 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwnk\" (UniqueName: \"kubernetes.io/projected/e8d069ec-32c8-4bb9-b033-a2799166684b-kube-api-access-4bwnk\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.564903 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-utilities\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.666214 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-utilities\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.666342 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-catalog-content\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.666384 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bwnk\" (UniqueName: \"kubernetes.io/projected/e8d069ec-32c8-4bb9-b033-a2799166684b-kube-api-access-4bwnk\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.667179 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-catalog-content\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.667249 4990 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-utilities\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.685289 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bwnk\" (UniqueName: \"kubernetes.io/projected/e8d069ec-32c8-4bb9-b033-a2799166684b-kube-api-access-4bwnk\") pod \"redhat-marketplace-nlkvv\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.772779 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnx58" event={"ID":"3f86bcb1-d24a-40ba-98ee-21972b0c3536","Type":"ContainerDied","Data":"3b54541431c1e9cb0db3df3f778db23acb83c0ee3ea89ab61ec745c22e5060bc"} Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.772837 4990 scope.go:117] "RemoveContainer" containerID="da5e8105ab6b758d1c12bea1f04cda4e5b78a24d6cb57e6b029dfd07d37e2e84" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.772844 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnx58" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.804274 4990 scope.go:117] "RemoveContainer" containerID="37020eda01018a6ad69c1218e9b3a72b2d7844bdbfa2afc651fae81e6931e54c" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.809093 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnx58"] Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.814629 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jnx58"] Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.826383 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.835347 4990 scope.go:117] "RemoveContainer" containerID="9a12fe4fb93b1279bc19c5badc7e0468ed779fb74043e5deb5e28250dba1b2d9" Dec 05 01:56:33 crc kubenswrapper[4990]: I1205 01:56:33.944225 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f86bcb1-d24a-40ba-98ee-21972b0c3536" path="/var/lib/kubelet/pods/3f86bcb1-d24a-40ba-98ee-21972b0c3536/volumes" Dec 05 01:56:34 crc kubenswrapper[4990]: I1205 01:56:34.255075 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlkvv"] Dec 05 01:56:34 crc kubenswrapper[4990]: I1205 01:56:34.782969 4990 generic.go:334] "Generic (PLEG): container finished" podID="e8d069ec-32c8-4bb9-b033-a2799166684b" containerID="db2d24cb0c74b02002abbbbd95cd6740ce90011f5e274160ea35827826ae43a1" exitCode=0 Dec 05 01:56:34 crc kubenswrapper[4990]: I1205 01:56:34.783058 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlkvv" event={"ID":"e8d069ec-32c8-4bb9-b033-a2799166684b","Type":"ContainerDied","Data":"db2d24cb0c74b02002abbbbd95cd6740ce90011f5e274160ea35827826ae43a1"} Dec 05 01:56:34 crc kubenswrapper[4990]: I1205 01:56:34.783288 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlkvv" event={"ID":"e8d069ec-32c8-4bb9-b033-a2799166684b","Type":"ContainerStarted","Data":"eaf213e8d62d73d8c3ca2071e568d1805307096fd53d9058221926dbcaaf5e8c"} Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.688698 4990 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wb6zz"] Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.690526 4990 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.700812 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb6zz"] Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.794376 4990 generic.go:334] "Generic (PLEG): container finished" podID="e8d069ec-32c8-4bb9-b033-a2799166684b" containerID="5d88a8a0bcfb301d63ad8c85389c26187360ad1829d27bead6b06ea9a7b210be" exitCode=0 Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.794415 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlkvv" event={"ID":"e8d069ec-32c8-4bb9-b033-a2799166684b","Type":"ContainerDied","Data":"5d88a8a0bcfb301d63ad8c85389c26187360ad1829d27bead6b06ea9a7b210be"} Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.796936 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-catalog-content\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.797079 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-utilities\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.797196 4990 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvv7\" (UniqueName: \"kubernetes.io/projected/a9e11d6e-940f-4529-ac0f-a12eba603627-kube-api-access-4wvv7\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.898318 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvv7\" (UniqueName: \"kubernetes.io/projected/a9e11d6e-940f-4529-ac0f-a12eba603627-kube-api-access-4wvv7\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.898379 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-catalog-content\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.898396 4990 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-utilities\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.898866 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-utilities\") pod \"community-operators-wb6zz\" (UID: 
\"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.898976 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-catalog-content\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:35 crc kubenswrapper[4990]: I1205 01:56:35.921921 4990 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvv7\" (UniqueName: \"kubernetes.io/projected/a9e11d6e-940f-4529-ac0f-a12eba603627-kube-api-access-4wvv7\") pod \"community-operators-wb6zz\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:36 crc kubenswrapper[4990]: I1205 01:56:36.007523 4990 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:36 crc kubenswrapper[4990]: I1205 01:56:36.472175 4990 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb6zz"] Dec 05 01:56:36 crc kubenswrapper[4990]: W1205 01:56:36.474715 4990 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9e11d6e_940f_4529_ac0f_a12eba603627.slice/crio-badbfc0d57af14fce650b321c584fbc8fd5bfc5e64702b945a2d2aed6785877e WatchSource:0}: Error finding container badbfc0d57af14fce650b321c584fbc8fd5bfc5e64702b945a2d2aed6785877e: Status 404 returned error can't find the container with id badbfc0d57af14fce650b321c584fbc8fd5bfc5e64702b945a2d2aed6785877e Dec 05 01:56:36 crc kubenswrapper[4990]: I1205 01:56:36.807037 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb6zz" event={"ID":"a9e11d6e-940f-4529-ac0f-a12eba603627","Type":"ContainerStarted","Data":"badbfc0d57af14fce650b321c584fbc8fd5bfc5e64702b945a2d2aed6785877e"} Dec 05 01:56:38 crc kubenswrapper[4990]: I1205 01:56:38.825959 4990 generic.go:334] "Generic (PLEG): container finished" podID="a9e11d6e-940f-4529-ac0f-a12eba603627" containerID="1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7" exitCode=0 Dec 05 01:56:38 crc kubenswrapper[4990]: I1205 01:56:38.826026 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb6zz" event={"ID":"a9e11d6e-940f-4529-ac0f-a12eba603627","Type":"ContainerDied","Data":"1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7"} Dec 05 01:56:38 crc kubenswrapper[4990]: I1205 01:56:38.831944 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlkvv" event={"ID":"e8d069ec-32c8-4bb9-b033-a2799166684b","Type":"ContainerStarted","Data":"6cecee41b0f6bffb4b41468670ec6ad65f3873dfddcccedfab5e6e14df3227ef"} Dec 05 01:56:38 crc kubenswrapper[4990]: I1205 01:56:38.869292 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nlkvv" podStartSLOduration=2.424245329 podStartE2EDuration="5.869269275s" podCreationTimestamp="2025-12-05 01:56:33 +0000 UTC" firstStartedPulling="2025-12-05 01:56:34.785441357 +0000 UTC m=+2893.161656728" lastFinishedPulling="2025-12-05 01:56:38.230465283 +0000 UTC m=+2896.606680674" observedRunningTime="2025-12-05 01:56:38.862300308 +0000 UTC 
m=+2897.238515729" watchObservedRunningTime="2025-12-05 01:56:38.869269275 +0000 UTC m=+2897.245484656" Dec 05 01:56:42 crc kubenswrapper[4990]: I1205 01:56:42.868527 4990 generic.go:334] "Generic (PLEG): container finished" podID="a9e11d6e-940f-4529-ac0f-a12eba603627" containerID="124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e" exitCode=0 Dec 05 01:56:42 crc kubenswrapper[4990]: I1205 01:56:42.868604 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb6zz" event={"ID":"a9e11d6e-940f-4529-ac0f-a12eba603627","Type":"ContainerDied","Data":"124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e"} Dec 05 01:56:43 crc kubenswrapper[4990]: I1205 01:56:43.827516 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:43 crc kubenswrapper[4990]: I1205 01:56:43.827893 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:43 crc kubenswrapper[4990]: I1205 01:56:43.878278 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb6zz" event={"ID":"a9e11d6e-940f-4529-ac0f-a12eba603627","Type":"ContainerStarted","Data":"1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674"} Dec 05 01:56:43 crc kubenswrapper[4990]: I1205 01:56:43.888238 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:43 crc kubenswrapper[4990]: I1205 01:56:43.894385 4990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wb6zz" podStartSLOduration=4.457007451 podStartE2EDuration="8.894361307s" podCreationTimestamp="2025-12-05 01:56:35 +0000 UTC" firstStartedPulling="2025-12-05 01:56:38.828596624 +0000 UTC m=+2897.204811995" lastFinishedPulling="2025-12-05 01:56:43.26595049 +0000 UTC m=+2901.642165851" observedRunningTime="2025-12-05 01:56:43.893373589 +0000 UTC m=+2902.269588950" watchObservedRunningTime="2025-12-05 01:56:43.894361307 +0000 UTC m=+2902.270576678" Dec 05 01:56:43 crc kubenswrapper[4990]: I1205 01:56:43.951732 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:44 crc kubenswrapper[4990]: I1205 01:56:44.881041 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlkvv"] Dec 05 01:56:45 crc kubenswrapper[4990]: I1205 01:56:45.894031 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nlkvv" podUID="e8d069ec-32c8-4bb9-b033-a2799166684b" containerName="registry-server" containerID="cri-o://6cecee41b0f6bffb4b41468670ec6ad65f3873dfddcccedfab5e6e14df3227ef" gracePeriod=2 Dec 05 01:56:46 crc kubenswrapper[4990]: I1205 01:56:46.008246 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:46 crc kubenswrapper[4990]: I1205 01:56:46.008512 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:46 crc kubenswrapper[4990]: I1205 01:56:46.045806 4990 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:46 crc kubenswrapper[4990]: I1205 
01:56:46.904767 4990 generic.go:334] "Generic (PLEG): container finished" podID="e8d069ec-32c8-4bb9-b033-a2799166684b" containerID="6cecee41b0f6bffb4b41468670ec6ad65f3873dfddcccedfab5e6e14df3227ef" exitCode=0 Dec 05 01:56:46 crc kubenswrapper[4990]: I1205 01:56:46.905666 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlkvv" event={"ID":"e8d069ec-32c8-4bb9-b033-a2799166684b","Type":"ContainerDied","Data":"6cecee41b0f6bffb4b41468670ec6ad65f3873dfddcccedfab5e6e14df3227ef"} Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.162968 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.288072 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-catalog-content\") pod \"e8d069ec-32c8-4bb9-b033-a2799166684b\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.288127 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bwnk\" (UniqueName: \"kubernetes.io/projected/e8d069ec-32c8-4bb9-b033-a2799166684b-kube-api-access-4bwnk\") pod \"e8d069ec-32c8-4bb9-b033-a2799166684b\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.288180 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-utilities\") pod \"e8d069ec-32c8-4bb9-b033-a2799166684b\" (UID: \"e8d069ec-32c8-4bb9-b033-a2799166684b\") " Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.289408 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-utilities" (OuterVolumeSpecName: "utilities") pod "e8d069ec-32c8-4bb9-b033-a2799166684b" (UID: "e8d069ec-32c8-4bb9-b033-a2799166684b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.296214 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d069ec-32c8-4bb9-b033-a2799166684b-kube-api-access-4bwnk" (OuterVolumeSpecName: "kube-api-access-4bwnk") pod "e8d069ec-32c8-4bb9-b033-a2799166684b" (UID: "e8d069ec-32c8-4bb9-b033-a2799166684b"). InnerVolumeSpecName "kube-api-access-4bwnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.309271 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d069ec-32c8-4bb9-b033-a2799166684b" (UID: "e8d069ec-32c8-4bb9-b033-a2799166684b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.390014 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.390051 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bwnk\" (UniqueName: \"kubernetes.io/projected/e8d069ec-32c8-4bb9-b033-a2799166684b-kube-api-access-4bwnk\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.390066 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d069ec-32c8-4bb9-b033-a2799166684b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.918619 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlkvv" event={"ID":"e8d069ec-32c8-4bb9-b033-a2799166684b","Type":"ContainerDied","Data":"eaf213e8d62d73d8c3ca2071e568d1805307096fd53d9058221926dbcaaf5e8c"} Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.918932 4990 scope.go:117] "RemoveContainer" containerID="6cecee41b0f6bffb4b41468670ec6ad65f3873dfddcccedfab5e6e14df3227ef" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.918665 4990 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlkvv" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.949368 4990 scope.go:117] "RemoveContainer" containerID="5d88a8a0bcfb301d63ad8c85389c26187360ad1829d27bead6b06ea9a7b210be" Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.972731 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlkvv"] Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.981471 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlkvv"] Dec 05 01:56:47 crc kubenswrapper[4990]: I1205 01:56:47.984267 4990 scope.go:117] "RemoveContainer" containerID="db2d24cb0c74b02002abbbbd95cd6740ce90011f5e274160ea35827826ae43a1" Dec 05 01:56:48 crc kubenswrapper[4990]: E1205 01:56:48.115253 4990 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d069ec_32c8_4bb9_b033_a2799166684b.slice/crio-eaf213e8d62d73d8c3ca2071e568d1805307096fd53d9058221926dbcaaf5e8c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d069ec_32c8_4bb9_b033_a2799166684b.slice\": RecentStats: unable to find data in memory cache]" Dec 05 01:56:49 crc kubenswrapper[4990]: I1205 01:56:49.944289 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d069ec-32c8-4bb9-b033-a2799166684b" path="/var/lib/kubelet/pods/e8d069ec-32c8-4bb9-b033-a2799166684b/volumes" Dec 05 01:56:51 crc kubenswrapper[4990]: I1205 01:56:51.823204 4990 patch_prober.go:28] interesting pod/machine-config-daemon-zxlh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 01:56:51 crc kubenswrapper[4990]: I1205 01:56:51.823277 4990 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 01:56:51 crc kubenswrapper[4990]: I1205 01:56:51.823327 4990 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" Dec 05 01:56:51 crc kubenswrapper[4990]: I1205 01:56:51.823955 4990 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f25f002653a5de7ace0b883c87587722f0570000f68e8d959c1c7196ab15b8c7"} pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 01:56:51 crc kubenswrapper[4990]: I1205 01:56:51.824043 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" podUID="b6580a04-67de-48f9-9da2-56cb4377af48" containerName="machine-config-daemon" containerID="cri-o://f25f002653a5de7ace0b883c87587722f0570000f68e8d959c1c7196ab15b8c7" gracePeriod=600 Dec 05 01:56:52 crc kubenswrapper[4990]: I1205 01:56:52.961975 4990 generic.go:334] "Generic (PLEG): container finished" podID="b6580a04-67de-48f9-9da2-56cb4377af48" containerID="f25f002653a5de7ace0b883c87587722f0570000f68e8d959c1c7196ab15b8c7" exitCode=0 Dec 05 01:56:52 crc kubenswrapper[4990]: I1205 01:56:52.962036 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerDied","Data":"f25f002653a5de7ace0b883c87587722f0570000f68e8d959c1c7196ab15b8c7"} Dec 05 01:56:52 crc kubenswrapper[4990]: I1205 01:56:52.962399 4990 scope.go:117] "RemoveContainer" containerID="80c7e485f9e3d44bd1a65b64ef3565f8b56151fc906368383324d650757be398" Dec 05 01:56:53 crc kubenswrapper[4990]: I1205 01:56:53.973206 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zxlh5" event={"ID":"b6580a04-67de-48f9-9da2-56cb4377af48","Type":"ContainerStarted","Data":"dae4539fc6aa5ae5ad5c66d6fe2e300377de05d09812bfdfc23f310f8726ddca"} Dec 05 01:56:56 crc kubenswrapper[4990]: I1205 01:56:56.076249 4990 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:56 crc kubenswrapper[4990]: I1205 01:56:56.151017 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb6zz"] Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.005279 4990 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wb6zz" podUID="a9e11d6e-940f-4529-ac0f-a12eba603627" containerName="registry-server" containerID="cri-o://1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674" gracePeriod=2 Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.486295 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.642736 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-catalog-content\") pod \"a9e11d6e-940f-4529-ac0f-a12eba603627\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.642902 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvv7\" (UniqueName: \"kubernetes.io/projected/a9e11d6e-940f-4529-ac0f-a12eba603627-kube-api-access-4wvv7\") pod \"a9e11d6e-940f-4529-ac0f-a12eba603627\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.642974 4990 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-utilities\") pod \"a9e11d6e-940f-4529-ac0f-a12eba603627\" (UID: \"a9e11d6e-940f-4529-ac0f-a12eba603627\") " Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.644111 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-utilities" (OuterVolumeSpecName: "utilities") pod "a9e11d6e-940f-4529-ac0f-a12eba603627" (UID: "a9e11d6e-940f-4529-ac0f-a12eba603627"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.656608 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e11d6e-940f-4529-ac0f-a12eba603627-kube-api-access-4wvv7" (OuterVolumeSpecName: "kube-api-access-4wvv7") pod "a9e11d6e-940f-4529-ac0f-a12eba603627" (UID: "a9e11d6e-940f-4529-ac0f-a12eba603627"). InnerVolumeSpecName "kube-api-access-4wvv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.709193 4990 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9e11d6e-940f-4529-ac0f-a12eba603627" (UID: "a9e11d6e-940f-4529-ac0f-a12eba603627"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.744439 4990 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.744520 4990 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvv7\" (UniqueName: \"kubernetes.io/projected/a9e11d6e-940f-4529-ac0f-a12eba603627-kube-api-access-4wvv7\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:57 crc kubenswrapper[4990]: I1205 01:56:57.744537 4990 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e11d6e-940f-4529-ac0f-a12eba603627-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.016877 4990 generic.go:334] "Generic (PLEG): container finished" podID="a9e11d6e-940f-4529-ac0f-a12eba603627" containerID="1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674" exitCode=0 Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.016922 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb6zz" event={"ID":"a9e11d6e-940f-4529-ac0f-a12eba603627","Type":"ContainerDied","Data":"1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674"} Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.016955 4990 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb6zz" event={"ID":"a9e11d6e-940f-4529-ac0f-a12eba603627","Type":"ContainerDied","Data":"badbfc0d57af14fce650b321c584fbc8fd5bfc5e64702b945a2d2aed6785877e"} Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.016974 4990 scope.go:117] "RemoveContainer" containerID="1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.017556 4990 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb6zz" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.046116 4990 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb6zz"] Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.052082 4990 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wb6zz"] Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.053661 4990 scope.go:117] "RemoveContainer" containerID="124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.077283 4990 scope.go:117] "RemoveContainer" containerID="1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.107015 4990 scope.go:117] "RemoveContainer" containerID="1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674" Dec 05 01:56:58 crc kubenswrapper[4990]: E1205 01:56:58.107680 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674\": container with ID starting with 1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674 not found: ID does not exist" containerID="1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.107725 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674"} err="failed to get container status \"1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674\": rpc error: code = NotFound desc = could not find container \"1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674\": container with ID starting with 1b023f9d2ec049f3d1f064ab5e95f0e0cae9059f38073715e0a5f0b0c5871674 not found: ID does not exist" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.107756 4990 scope.go:117] "RemoveContainer" containerID="124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e" Dec 05 01:56:58 crc kubenswrapper[4990]: E1205 01:56:58.108075 4990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e\": container with ID starting with 124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e not found: ID does not exist" containerID="124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.108113 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e"} err="failed to get container status \"124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e\": rpc error: code = NotFound desc = could not find container \"124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e\": container with ID starting with 124a5c8e2ce8a04621f0d79f55e7c9b4d8eef4f9f6afa9339386025565abc79e not found: ID does not exist" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.108138 4990 scope.go:117] "RemoveContainer" containerID="1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7" Dec 05 01:56:58 crc kubenswrapper[4990]: E1205 01:56:58.108357 4990 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7\": container with ID starting with 1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7 not found: ID does not exist" containerID="1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7" Dec 05 01:56:58 crc kubenswrapper[4990]: I1205 01:56:58.108385 4990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7"} err="failed to get container status \"1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7\": rpc error: code = NotFound desc = could not find container \"1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7\": container with ID starting with 1800911d038885b0de2ec782793cdf614c02b8923adf2f4407b98dad34ce68e7 not found: ID does not exist" Dec 05 01:56:59 crc kubenswrapper[4990]: I1205 01:56:59.946770 4990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e11d6e-940f-4529-ac0f-a12eba603627" path="/var/lib/kubelet/pods/a9e11d6e-940f-4529-ac0f-a12eba603627/volumes"